CN112866629B - Control method and terminal for binocular vision application

Info

Publication number
CN112866629B
CN112866629B
Authority
CN
China
Prior art keywords
terminal
camera
vehicle
image
value
Legal status
Active
Application number
CN201911197375.0A
Other languages
Chinese (zh)
Other versions
CN112866629A
Inventor
Name withheld at the inventor's request
Current Assignee
Anhui Tatfook Technology Co Ltd
Original Assignee
Anhui Tatfook Technology Co Ltd
Application filed by Anhui Tatfook Technology Co Ltd
Priority to CN201911197375.0A
Publication of CN112866629A
Application granted
Publication of CN112866629B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses a control method and a terminal for binocular vision applications. The terminal is connected with a vehicle-mounted camera of a target vehicle. When the terminal camera and the vehicle-mounted camera are in a first relative position, defined as a position in which the two cameras do not coincide, the terminal obtains the current first relative positional relationship between the terminal camera and the vehicle-mounted camera according to a preset first rule. The terminal then acquires a first image of a target object captured by the terminal camera and a second image of the target object captured by the vehicle-mounted camera, the two images being captured at the same moment. Finally, the terminal determines target depth information of the target object corresponding to the first image, the second image and the first relative positional relationship according to a preset second rule.

Description

Control method and terminal for binocular vision application
Technical Field
The embodiment of the application relates to the technical field of vision, in particular to a control method and a terminal for binocular vision application.
Background
An advanced driver assistance system (Advanced Driver Assistance Systems, ADAS) uses a variety of sensors mounted on a vehicle to sense the surrounding environment at all times while the vehicle is running, collecting data and identifying, detecting and tracking static and dynamic objects. Combined with the navigator's map data, it performs systematic calculation and analysis, enabling the driver to perceive possible dangers in advance. With the application and popularization of ADAS, obtaining depth information of a target object in three-dimensional space while identifying, detecting and tracking static and dynamic objects makes the analysis and judgment of the ADAS more accurate.
In the prior art, an ADAS generally obtains depth information of a target object through a high-precision laser radar (lidar). The lidar uses laser light as its signal source: pulsed laser light emitted by the laser is scattered when it strikes the target object, part of the light wave is reflected back to the receiver of the lidar, and the distance from the lidar to the target point is calculated according to the laser ranging principle. As the pulsed laser continuously scans the target object, data for all target points on the object are obtained, and after imaging processing of these data an accurate three-dimensional image can be obtained.
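For reference, the laser ranging principle mentioned above reduces to the standard time-of-flight relation (textbook material rather than anything specific to this patent):

$$d = \frac{c \, \Delta t}{2}$$

where $c$ is the speed of light and $\Delta t$ is the measured round-trip time of the pulse; the factor of 2 accounts for the pulse travelling to the target point and back.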
As a precision optical instrument, a high-precision lidar requires a large amount of manual time for tuning and calibration during production, and the tuning and calibration time grows geometrically as the number of laser beams increases. In addition, since lidar is an active environment-sensing method, multiple lidars may interfere with one another during detection.
Disclosure of Invention
The embodiments of the application provide a control method and a terminal for binocular vision applications, in which the terminal determines target depth information of a target object using a binocular vision method. Compared with the prior-art use of a high-precision lidar, this saves a large amount of tuning and calibration time, and because binocular vision performs ranging in a passive manner, interference during detection is reduced.
A first aspect of an embodiment of the present application provides a control method for binocular vision application, including:
the terminal is connected with a vehicle-mounted camera of a target vehicle;
when the terminal camera of the terminal and the vehicle-mounted camera are in a first relative position, the terminal obtains the current first relative positional relationship between the terminal camera and the vehicle-mounted camera according to a preset first rule, the first relative position being a position in which the terminal camera and the vehicle-mounted camera do not coincide;
the terminal acquires a first image of a target object and a second image of the target object, wherein the first image is an image captured by the terminal camera, the second image is an image captured by the vehicle-mounted camera, and the first image and the second image are captured at the same moment;
and the terminal determines target depth information of the target object corresponding to the first image, the second image and the first relative positional relationship according to a preset second rule.
Optionally, before the terminal obtains the current first relative positional relationship between the terminal camera and the vehicle-mounted camera according to the preset first rule, the method further includes:
When the positions of the terminal camera and the vehicle-mounted camera are preset initial calibration positions, the terminal determines a first internal parameter matrix and a first distortion coefficient matrix of the terminal camera according to a target algorithm;
The terminal determines a second internal parameter matrix and a second distortion coefficient matrix of the vehicle-mounted camera according to the target algorithm;
And the terminal determines an initial translation vector in the initial relative position relation of the terminal camera and the vehicle-mounted camera according to the target algorithm.
Optionally, the target algorithm comprises an open-source computer vision function library.
Optionally, the first relative positional relationship includes a first rotation matrix and a first translation vector;
the obtaining, by the terminal, of the current first relative positional relationship between the terminal camera and the vehicle-mounted camera according to the preset first rule includes:
the terminal determines the initial translation vector as the first translation vector;
The terminal obtains a current first reading of an accelerometer of the terminal;
the terminal obtains a current second reading of an accelerometer of the target vehicle;
the terminal acquires a first vanishing point position of a current field of view of the terminal camera;
the terminal acquires a second vanishing point position of the current field of view of the vehicle-mounted camera;
And the terminal determines the first rotation matrix corresponding to the first reading, the second reading, the first vanishing point position and the second vanishing point position according to a preset third rule.
Optionally, the first relative positional relationship includes a first rotation matrix and a first translation vector;
Optionally, the obtaining, by the terminal, the current first relative positional relationship between the terminal camera and the vehicle-mounted camera according to a preset first rule includes:
the terminal determines the initial translation vector as the first translation vector;
the terminal obtains a current first accelerometer reading of an accelerometer of the terminal;
The terminal obtains a current first gyroscope reading of a gyroscope of the terminal;
The terminal obtains a current second accelerometer reading of an accelerometer of the target vehicle;
the terminal obtains a current second gyroscope reading of a gyroscope of the target vehicle;
the terminal determines the first rotation matrix from the first accelerometer reading, the first gyroscope reading, the second accelerometer reading, and the second gyroscope reading.
Optionally, before the terminal obtains the first relative positional relationship between the terminal camera and the vehicle-mounted camera according to the preset first rule, the method further includes:
The terminal determines an initial rotation matrix in the initial relative position relation between the terminal camera and the vehicle-mounted camera according to the target algorithm;
the obtaining, by the terminal, of the current first relative positional relationship between the terminal camera and the vehicle-mounted camera according to the preset first rule includes:
the terminal determines the initial translation vector as the first translation vector;
The terminal acquires a third vanishing point position of the current field of view of the terminal camera;
The terminal obtains a fourth vanishing point position of the current field of view of the vehicle-mounted camera;
And the terminal determines the first rotation matrix corresponding to the third vanishing point position, the fourth vanishing point position and the initial rotation matrix according to a preset fourth rule.
Optionally, the target depth information includes a distance between the target object and the target vehicle;
Optionally, the determining, by the terminal, the target depth information of the target object corresponding to the first image, the second image and the first relative position relationship according to a preset second rule includes:
the terminal performs image preprocessing on the first image and the second image;
the terminal performs feature extraction on the first image after image preprocessing and the second image after image preprocessing;
the terminal performs stereo matching of features on the first image after image preprocessing and the second image after image preprocessing to obtain a first matching parallax;
The terminal determines the distance according to the first matching parallax, the first internal parameter matrix, the first distortion coefficient matrix, the second internal parameter matrix, the second distortion coefficient matrix, the first rotation matrix and the first translation vector.
A second aspect of an embodiment of the present application provides a terminal, including: a memory and a processor connected to each other, wherein the memory is configured to store a computer program which, when executed by the processor, implements the method flow of the first aspect.
A third aspect of the embodiments of the present application provides a computer program product comprising computer software instructions which, when loaded and executed by a processor, implement the method flow of the first aspect described above.
A fourth aspect of the embodiments of the present application provides a computer readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method flow of the first aspect described above.
From the above technical solutions, the embodiment of the present application has the following advantages:
When the terminal camera and the vehicle-mounted camera are in the first relative position, that is, a position in which the two cameras do not coincide, the terminal obtains the current first relative positional relationship between the terminal camera and the vehicle-mounted camera according to a preset first rule. The terminal acquires a first image of a target object captured by the terminal camera and a second image of the target object captured by the vehicle-mounted camera at the same moment. The terminal then determines target depth information of the target object corresponding to the first image, the second image and the first relative positional relationship according to a preset second rule. The terminal thus determines the target depth information of the target object using a binocular vision method.
Drawings
FIG. 1 is a diagram of a binocular vision application system architecture in an embodiment of the present application;
FIG. 2 is a schematic diagram of an embodiment of a control method for binocular vision application in an embodiment of the present application;
FIG. 3a is a schematic diagram of a terminal placement scenario in an embodiment of the present application;
FIG. 3b is a schematic diagram of another terminal placement scenario in an embodiment of the present application;
FIG. 4 is a schematic diagram of another embodiment of a control method for binocular vision application in an embodiment of the present application;
FIG. 5 is a schematic diagram of another embodiment of a control method for binocular vision application in an embodiment of the present application;
FIG. 6 is a schematic diagram of another embodiment of a control method for binocular vision application in an embodiment of the present application;
FIG. 7 is a schematic diagram of an embodiment of a terminal in an embodiment of the present application;
FIG. 8 is a schematic diagram of another embodiment of a terminal in an embodiment of the present application.
Detailed Description
The embodiments of the application provide a control method and a terminal for binocular vision applications, in which the terminal determines target depth information of a target object using a binocular vision method. Compared with the prior-art use of a high-precision lidar, this saves a large amount of tuning and calibration time, and because binocular vision performs ranging in a passive manner, interference during detection is reduced.
Referring to fig. 1, the binocular vision application system architecture includes a terminal 101, a target vehicle 102 and a vehicle-mounted camera 103. The vehicle-mounted camera 103 is mounted on the target vehicle 102 and can be connected to the on-board computer of the target vehicle 102 through a USB cable or Bluetooth, which is not limited here. The terminal 101 may be placed on a fixed support of the target vehicle 102 and may be connected to the on-board computer of the target vehicle 102 through WIFI, Bluetooth or the like, which is likewise not limited here. The on-board computer of the target vehicle 102 may be a dedicated vehicle informatization product developed for the special running environment of the vehicle and the characteristics of its electric circuits, resistant to high temperature, dust and shock, and able to integrate with the vehicle's electronic circuits; its main functions include in-vehicle multimedia entertainment, GPS satellite navigation, professional diagnosis of vehicle information and faults, mobile office and industry applications, and the like.
The vehicle-mounted camera 103 may be mounted at the taillights, the front and rear license plates, the left and right rear-view mirrors, etc. of the target vehicle 102, which is not limited here. Vehicle-mounted cameras may be classified into front-view and rear-view cameras according to mounting position and function, and into monocular, binocular and multi-ocular cameras according to the manner of use, which is likewise not limited here; in the following embodiments only a monocular front-view vehicle-mounted camera is described as an example. After the vehicle-mounted camera 103 enters the working state, the collected video or image information can be sent through the wireless transmitter of the vehicle-mounted camera 103 to its wireless receiver, and the wireless receiver then sends the video or image information to the on-board computer of the target vehicle 102 through an AV IN interface.
The terminal 101 is equipped with a camera and a GPU accelerator, can collect video or image information of the same target object synchronously with the vehicle-mounted camera 103, can establish an indirect connection with the vehicle-mounted camera 103 through the on-board computer of the target vehicle 102, and can obtain the video or image information collected by the vehicle-mounted camera 103.
Referring to fig. 2, an embodiment of a control method for binocular vision application in an embodiment of the present application includes:
201. The terminal is connected with a vehicle-mounted camera of a target vehicle;
The terminal may be mounted on a stand of the target vehicle. The stand may be near the front seats, close to the front windshield, or near the rear seats, close to the rear window, so that the terminal camera can capture a target object outside the windows of the target vehicle or a target object inside the vehicle; this is not limited here. In this and subsequent embodiments, only the case in which the stand is near the front seats close to the front windshield and the terminal camera captures a target object outside the vehicle and in front of the target vehicle is described as an example.
The vehicle-mounted camera may be a monocular camera or a binocular camera, which is not limited here; in this and subsequent embodiments only a monocular vehicle-mounted camera is described as an example. The vehicle-mounted camera may be a front or rear camera mounted on the left or right rear-view mirror, the rear license plate or a taillight of the target vehicle, or mounted inside the vehicle; the specific location is not limited. The vehicle-mounted camera may be used to capture a target object outside the windows of the target vehicle or a target object inside the vehicle, which is likewise not limited; in this and subsequent embodiments, only capturing a target object outside the vehicle and in front of the target vehicle is described as an example.
In this embodiment, the terminal may establish a Bluetooth or WIFI connection with the on-board computer of the target vehicle, which is not limited here. The vehicle-mounted camera may establish a connection with the on-board computer of the target vehicle through its own wireless transmitter, wireless receiver and AV IN interface, or through a USB cable, which is likewise not limited. The terminal and the vehicle-mounted camera can thus establish an indirect connection through the on-board computer of the target vehicle.
The terminal may process the image or video information captured by the terminal camera, may receive and process the image or video information captured by the vehicle-mounted camera, and may control the vehicle-mounted camera and the terminal camera to shoot images or video of the same target object at the same moment. The terminal may be a mobile phone with a GPU accelerator, for example an iPhone X, or a tablet computer with a high-speed CPU; in this and subsequent embodiments only a mobile phone is described as an example.
202. The terminal obtains the current first relative positional relationship between the terminal camera and the vehicle-mounted camera according to a preset first rule;
In this embodiment, when the terminal camera and the vehicle-mounted camera are in the first relative position, the terminal may obtain the current first relative positional relationship between the two cameras according to a preset first rule. In the first relative position, the terminal camera mounted on the stand of the target vehicle and the vehicle-mounted camera do not coincide; specifically, referring to fig. 3a and 3b, the terminal mounted near the driver's seat close to the front windshield and the vehicle-mounted camera mounted outside the target vehicle do not coincide.
The first relative positional relationship is the positional relationship when the terminal camera and the vehicle-mounted camera are in the first relative position. It may be described by a first rotation matrix and a first translation vector, or in other forms, for example by a set of equations, which is not limited here; in this and subsequent embodiments only the first rotation matrix and the first translation vector are described as an example.
203. The terminal acquires a first image of a target object and a second image of the target object;
When the terminal camera and the vehicle-mounted camera remain in the first relative position described in step 202, the terminal may acquire a first image of the target object captured by the terminal camera and a second image captured by the vehicle-mounted camera, the first image and the second image being captured at the same moment. The terminal may control the terminal camera to capture and store images of the target object at a preset period, or the terminal camera may capture images of the target object continuously, which is not limited here. Likewise, the terminal may control the vehicle-mounted camera to capture images of the target object and send them to the terminal at a preset period, or the vehicle-mounted camera may capture continuously and send images to the terminal at a preset reporting period, which is not limited here. Because the positions of the terminal camera and the vehicle-mounted camera do not coincide, the first image and the second image show the target object from different shooting angles at the same moment.
204. And the terminal determines target depth information of the target object corresponding to the first image, the second image and the first relative position relation according to a preset second rule.
In this embodiment, the terminal may determine, according to a preset second rule, target depth information of the target object corresponding to the first image, the second image, and the first relative positional relationship.
The target depth information of the target object may be the distance between the target object and the target vehicle, or reconstructed three-dimensional information including the shape, running speed, etc. of the target object, which is not limited here; in this and subsequent embodiments only the distance between the target object and the target vehicle is described as an example of the target depth information.
In the embodiment of the application, the terminal is connected to the vehicle-mounted camera of the target vehicle. When the terminal camera and the vehicle-mounted camera are in the first relative position, that is, a position in which they do not coincide, the terminal obtains the current first relative positional relationship between the two cameras according to the preset first rule. The terminal acquires a first image of the target object captured by the terminal camera and a second image captured by the vehicle-mounted camera at the same moment, and determines the target depth information of the target object corresponding to the first image, the second image and the first relative positional relationship according to the preset second rule. The terminal thus determines the target depth information of the target object using a binocular vision method.
In the embodiment of the application, the terminal obtains the current first relative position relationship between the terminal camera and the vehicle-mounted camera according to the preset first rule in various modes, and the following descriptions are respectively provided:
1. When the camera of the mobile phone and the vehicle-mounted camera are in the first relative position, the mobile phone obtains the current first accelerometer reading of its own accelerometer, obtains the current first gyroscope reading of its own gyroscope, obtains the current second accelerometer reading of the accelerometer of the target vehicle and the current second gyroscope reading of the gyroscope of the target vehicle, and determines the first rotation matrix in the first relative positional relationship according to the first accelerometer reading, the first gyroscope reading, the second accelerometer reading and the second gyroscope reading;
401. The mobile phone is connected with the vehicle-mounted camera of the target vehicle;
In this embodiment, the mobile phone may be connected to the on-board computer of the target vehicle by Bluetooth or WIFI, which is not limited here. The vehicle-mounted camera may be connected to the on-board computer of the target vehicle through its own wireless transmitter, wireless receiver and AV IN interface, or through a USB cable, which is likewise not limited. The mobile phone and the vehicle-mounted camera can thus be connected indirectly through the on-board computer of the target vehicle.
402. The mobile phone calls an OpenCV program to determine a first internal parameter matrix and a first distortion coefficient matrix of a camera of the mobile phone;
403. The mobile phone calls an OpenCV program to determine a second internal parameter matrix and a second distortion coefficient matrix of the vehicle-mounted camera;
404. The mobile phone calls an OpenCV program to determine an initial translation vector in an initial relative position relation between a camera of the mobile phone and the vehicle-mounted camera;
In this embodiment, when the camera of the mobile phone and the vehicle-mounted camera are in the preset initial calibration position, the mobile phone may determine, according to a target algorithm, the first internal parameter matrix and the first distortion coefficient matrix of its own camera, the second internal parameter matrix and the second distortion coefficient matrix of the vehicle-mounted camera, and the initial translation vector in the current initial relative positional relationship between the camera of the mobile phone and the vehicle-mounted camera.
When determining the initial translation vectors of the camera of the mobile phone and the vehicle-mounted camera, the coordinate system of the camera of the mobile phone may be selected as a reference coordinate system, or the coordinate system of the vehicle-mounted camera may be selected as a reference coordinate system, which is not limited herein.
It should be noted that the internal parameters of a camera include 1/dx, 1/dy, r, u0 and v0, where dx and dy indicate how many physical units one pixel occupies in the x and y directions respectively and are the key quantities relating the physical coordinate system of the real image to the pixel coordinate system, and u0 and v0 are the horizontal and vertical offsets, in pixels, between the center pixel coordinate of the image and the origin of the image. The distortion parameters include radial distortion coefficients k1, k2 and k3 and tangential distortion coefficients p1 and p2; radial distortion arises during the transformation from the camera coordinate system to the physical coordinate system, while tangential distortion arises because the lens is not perfectly parallel to the image plane.
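As an illustrative sketch only, the pinhole-camera convention behind these parameters packs the internal parameters into a 3x3 matrix and the distortion coefficients into a vector; all numeric values below are hypothetical placeholders, and the OpenCV coefficient ordering is assumed:

```python
import numpy as np

# Hypothetical internal parameters: fx = f/dx and fy = f/dy are focal
# lengths in pixels, r is the (usually zero) skew term, and (u0, v0)
# is the offset of the image center from the image origin in pixels.
fx, fy, r, u0, v0 = 1000.0, 1000.0, 0.0, 640.0, 360.0

K = np.array([[fx, r,  u0],
              [0., fy, v0],
              [0., 0., 1.]])  # internal parameter matrix

# Distortion coefficients in OpenCV's order: k1, k2, p1, p2, k3
# (k1, k2, k3 radial; p1, p2 tangential).
dist = np.array([0.10, -0.05, 0.001, 0.001, 0.01])
```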
In image measurement processes and machine vision applications, in order to determine the relationship between the three-dimensional geometric position of a point on the surface of a spatial object and its corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters, and the process of solving for them is called camera calibration. In this embodiment, the first internal parameter matrix, the first distortion coefficient matrix, the second internal parameter matrix and the second distortion coefficient matrix may be determined by a calibration method; the initial calibration may be performed by a conventional camera calibration method, and the target algorithm may be direct linear transformation, Zhang Zhengyou's planar calibration method or a circle calibration method, which is not specifically limited here. The target algorithm may be implemented by the mobile phone calling an open source computer vision function library (open source computer vision library, OpenCV) program, or by calling another purpose-written program, which is likewise not limited; in this and subsequent embodiments only calling an OpenCV program is described as an example.
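A minimal sketch of such an initial calibration using OpenCV's chessboard routine follows; the folder name, board dimensions and square size are assumptions, and the patent does not prescribe this exact flow:

```python
import glob
import cv2
import numpy as np

board = (9, 6)      # hypothetical count of inner chessboard corners
square = 0.025      # hypothetical square size in meters

# 3D positions of the board corners in the board's own plane (Z = 0).
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.jpg"):   # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the internal parameter matrix, dist the distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```

Running the same routine on images from both cameras would yield the first and second internal parameter and distortion coefficient matrices; OpenCV's cv2.stereoCalibrate could then recover the initial rotation matrix and translation vector between them.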
In this embodiment, the initial calibration positions of the terminal camera and the vehicle-mounted camera may be chosen randomly, or may follow a certain arrangement, for example with the optical axis of the terminal camera parallel to that of the vehicle-mounted camera and their X and Y axes respectively aligned, which is not limited here. The longer the baseline between the optical center of the terminal camera and the optical center of the vehicle-mounted camera, the more accurate the distance obtained in the subsequent calculation.
In this embodiment there is no fixed execution order among step 402, step 403 and step 404: any of them may be executed first, or they may be executed simultaneously as circumstances allow, which is not limited here.
405. The mobile phone determines that the initial translation vector is a first translation vector in a first relative position relation between a camera of the mobile phone and the vehicle-mounted camera;
When the mobile phone is mounted on the stand of the target vehicle, it may be turned during use, so that the relative position of the mobile phone and the vehicle-mounted camera deflects angularly from the initial calibration position described in step 404; for example, the positions of the mobile phone and the vehicle-mounted camera shown in fig. 3a are initial calibration positions, while the positions shown in fig. 3b are first relative positions. Because the stand is fixed, the mobile phone mounted on it experiences essentially no translational displacement during use. Therefore, when the camera of the mobile phone and the vehicle-mounted camera do not coincide and are in the first relative position, the mobile phone may take the value of the initial translation vector described in step 404 as the value of the first translation vector in the first relative positional relationship between the camera of the mobile phone and the vehicle-mounted camera; the mobile phone may also calculate the first translation vector by a standard method, which is not limited here. In this and subsequent embodiments, only taking the value of the initial translation vector as the value of the first translation vector is described as an example.
406. The mobile phone reads the current first accelerometer reading of the accelerometer of the mobile phone and calculates to obtain a first angle value in the X direction and a first angle value in the Z direction;
In this embodiment, the accelerometer of the mobile phone may collect data continuously, and it may be three-axis or six-axis, which is not limited here. When the camera of the mobile phone and the vehicle-mounted camera do not coincide and are in the first relative position, the mobile phone may read the first accelerometer reading of its accelerometer at a certain frequency, for example once every 10 frames or once every 20 frames, which is not limited here. After obtaining the first accelerometer reading, the mobile phone can calculate the first angle value in the X direction and the first angle value in the Z direction according to a standard method, which is not described in detail here.
407. The mobile phone obtains the current second accelerometer reading of the accelerometer of the target vehicle and calculates to obtain a second angle value in the X direction and a second angle value in the Z direction;
In this embodiment, the accelerometer of the target vehicle may collect data continuously, and it may be three-axis or six-axis, which is not limited here. When the camera of the mobile phone and the vehicle-mounted camera do not coincide and are in the first relative position, the mobile phone may read the second accelerometer reading of the target vehicle at a certain frequency, for example once every 10 frames or once every 20 frames, which is not limited here. The mobile phone may obtain the second accelerometer reading by reading interface data of the on-board diagnostics system (on board diagnostic, OBD), or through the on-board computer of the target vehicle, which is likewise not limited. After obtaining the second accelerometer reading, the mobile phone can calculate the second angle value in the X direction and the second angle value in the Z direction according to a standard method, which is not described in detail here.
408. The mobile phone subtracts the first angle value in the X direction from the second angle value in the X direction to obtain a plane rotation angle around the X axis, and subtracts the first angle value in the Z direction from the second angle value in the Z direction to obtain a plane rotation angle around the Z axis;
In this embodiment, the mobile phone may subtract the first angle value in the X direction obtained in the foregoing step 406 from the second angle value in the X direction obtained in the foregoing step 407 to obtain the plane rotation angle around the X axis, and may likewise subtract the first angle value in the Z direction obtained in the foregoing step 406 from the second angle value in the Z direction obtained in the foregoing step 407 to obtain the plane rotation angle around the Z axis.
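One common realization of the "standard method" left open above is to recover tilt from gravity with atan2 and then difference the two devices, sketched here under an assumed axis convention; it is offered as an illustration, not as the patent's prescribed computation:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Tilt angles (radians) from a static accelerometer sample,
    assuming gravity is the only acceleration acting on the device."""
    angle_x = math.atan2(ay, az)                   # angle value in the X direction
    angle_z = math.atan2(ax, math.hypot(ay, az))   # angle value in the Z direction
    return angle_x, angle_z

# Hypothetical readings for the phone (first) and the vehicle (second).
first_x, first_z = tilt_from_accel(0.02, 0.15, 9.78)
second_x, second_z = tilt_from_accel(0.00, 0.05, 9.81)

# Per step 408: rotation angles are the differences of the angle values.
rot_x = second_x - first_x   # plane rotation angle around the X axis
rot_z = second_z - first_z   # plane rotation angle around the Z axis
```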
409. The mobile phone reads the current first gyroscope reading of the gyroscope of the mobile phone and calculates to obtain a first angle value in the Y direction;
In this embodiment, the gyroscope of the mobile phone may collect data continuously. When the camera of the mobile phone and the vehicle-mounted camera do not coincide and are in the first relative position, the mobile phone may read the first gyroscope reading of its gyroscope at a certain frequency, for example once every 10 frames or once every 20 frames, which is not limited here. After obtaining the first gyroscope reading, the mobile phone can calculate the first angle value in the Y direction according to a standard method, which is not described in detail here.
410. The mobile phone obtains the current second gyroscope reading of the gyroscope of the target vehicle and calculates to obtain a second angle value in the Y direction;
In this embodiment, the gyroscope of the target vehicle may collect data continuously. When the camera of the mobile phone and the vehicle-mounted camera do not coincide and are in the first relative position, the mobile phone may read the second gyroscope reading of the target vehicle at a certain frequency, for example once every 10 frames or once every 20 frames, which is not limited here. The mobile phone may obtain the second gyroscope reading by reading interface data of the on-board diagnostics system (OBD), or through the on-board computer of the target vehicle, which is likewise not limited. After obtaining the second gyroscope reading, the mobile phone can calculate the second angle value in the Y direction according to a standard method, which is not described in detail here.
411. The mobile phone subtracts the first angle value in the Y direction from the second angle value in the Y direction to obtain a plane rotation angle around the Y axis;
In this embodiment, the mobile phone may subtract the first angle value in the Y direction obtained in the foregoing step 409 from the second angle value in the Y direction obtained in the foregoing step 410 to obtain the plane rotation angle around the Y axis.
In this embodiment, steps 406 to 408 determine the plane rotation angle around the X axis and the plane rotation angle around the Z axis, and steps 409 to 411 determine the plane rotation angle around the Y axis. There is no fixed execution order between these two processes: either may be performed first, or they may be performed simultaneously as circumstances allow, which is not limited here.
412. The mobile phone calculates a first rotation matrix in a first relative position relation between a camera of the mobile phone and the vehicle-mounted camera according to the plane rotation angle around the X axis, the plane rotation angle around the Y axis and the plane rotation angle around the Z axis;
in this embodiment, the mobile phone may calculate, according to a standard method, a first rotation matrix in a first relative positional relationship between a camera of the mobile phone and a vehicle-mounted camera by using a plane rotation angle around an X axis, a plane rotation angle around a Y axis, and a plane rotation angle around a Z axis, which will not be described in detail herein.
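The standard method referred to here is typically the composition of three per-axis rotations; the following sketch assumes a Z·Y·X composition order, which the patent does not fix:

```python
import numpy as np

def rotation_matrix(rot_x, rot_y, rot_z):
    """Compose plane rotation angles (radians) around the X, Y and Z
    axes into one rotation matrix, using the Z @ Y @ X convention."""
    cx, sx = np.cos(rot_x), np.sin(rot_x)
    cy, sy = np.cos(rot_y), np.sin(rot_y)
    cz, sz = np.cos(rot_z), np.sin(rot_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Hypothetical angles from steps 408 and 411.
R1 = rotation_matrix(0.010, -0.020, 0.005)   # first rotation matrix
```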
413. The mobile phone acquires a first image of a target object acquired by a camera of the mobile phone and a second image of the target object acquired by a vehicle-mounted camera at the same moment as the first image;
In this embodiment, when the camera of the mobile phone and the vehicle-mounted camera do not coincide and are in the first relative position, the two cameras can synchronously capture images or video of the target object in front of the target vehicle. The images or video captured by the vehicle-mounted camera can be transmitted to the on-board computer of the target vehicle, through which the mobile phone can obtain them. The mobile phone can then select a first image of the target object captured by its own camera at a certain moment and a second image of the target object captured by the vehicle-mounted camera at the same moment; because the positions of the two cameras do not coincide, the first image and the second image show the target object from different shooting angles at the same moment.
414. The mobile phone performs image preprocessing on the first image and the second image;
In this embodiment, the mobile phone may perform image preprocessing on the first image and the second image according to a standard method, for example, sequentially perform graying, smoothing denoising, image segmentation and edge detection on the first image and the second image, which will not be described in detail herein.
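A sketch of one such preprocessing chain with OpenCV follows; the particular operators, kernel size and thresholds are assumptions, since the patent only names the stages:

```python
import cv2

def preprocess(img_bgr):
    """Graying, smoothing/denoising and edge detection per step 414;
    the Canny edge map doubles as a crude segmentation here."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)   # graying
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)         # smoothing/denoising
    edges = cv2.Canny(smooth, 50, 150)                 # edge detection
    return smooth, edges

# Hypothetical file names for the first and second images.
first_smooth, first_edges = preprocess(cv2.imread("first.jpg"))
second_smooth, second_edges = preprocess(cv2.imread("second.jpg"))
```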
415. The mobile phone performs feature extraction on the first image after image preprocessing and the second image after image preprocessing;
In this embodiment, the mobile phone may perform feature extraction on the preprocessed first image and the preprocessed second image according to a standard method, selecting from their image data the factors that best reflect the attribute features of the target object for use in the subsequent stereo matching, which is not described in detail here.
416. The mobile phone performs stereo matching of features on the first image after image preprocessing and the second image after image preprocessing to obtain a first matching parallax;
In this embodiment, after executing the foregoing step 415, the mobile phone may perform stereo matching of the features according to a standard method, selecting a suitable stereo matching algorithm for the corresponding operation, which is not described in detail here.
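As one example of a "suitable stereo matching algorithm", OpenCV's semi-global block matcher could fill this role; the sketch below assumes the two preprocessed images have already been rectified into a common stereo geometry, which the rotation matrix and translation vector would provide:

```python
import cv2

# Hypothetical matcher parameters; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
)

# first_smooth / second_smooth come from the preprocessing sketch above;
# SGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(first_smooth, second_smooth).astype("float32") / 16.0
```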
In this embodiment, steps 405 to 412 are the process by which the mobile phone determines the first translation vector and the first rotation matrix, and steps 413 to 416 are the process of determining the first matching parallax. There is no fixed execution order between the two processes: either may be performed first, or they may be performed simultaneously as circumstances allow, which is not limited here.
417. And the mobile phone determines the distance between the target object and the target vehicle according to the first matching parallax, the first internal parameter matrix, the first distortion coefficient matrix, the second internal parameter matrix, the second distortion coefficient matrix, the first rotation matrix and the first translation vector.
In this embodiment, the mobile phone may determine the distance between the target object and the target vehicle according to the first matching parallax, the first internal parameter matrix, the first distortion coefficient matrix, the second internal parameter matrix, the second distortion coefficient matrix, the first rotation matrix and the first translation vector according to a standard method, which is not described herein in detail.
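For a rectified pair this last step reduces to the textbook triangulation relation Z = f·B/d; the sketch below works under that assumption (in the full method the internal parameter and distortion coefficient matrices enter through the rectification itself):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Z = f * B / d: focal length f in pixels (from the internal
    parameter matrix), baseline B in meters (the length of the first
    translation vector), disparity d in pixels."""
    d = np.where(disparity > 0, disparity, np.nan)   # mask invalid matches
    return focal_px * baseline_m / d

# Hypothetical values; 'disparity' comes from the matching sketch above.
depth_m = depth_from_disparity(disparity, focal_px=1000.0, baseline_m=0.35)
```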
In this embodiment, when the terminal camera and the vehicle-mounted camera are in the first relative position, that is, a position in which they do not coincide, the terminal obtains the current first relative positional relationship between the two cameras according to a preset first rule. The terminal acquires a first image of the target object captured by the terminal camera and a second image captured by the vehicle-mounted camera at the same moment, and determines the target depth information of the target object corresponding to the first image, the second image and the first relative positional relationship according to a preset second rule. The terminal thus determines the target depth information of the target object using a binocular vision method.
2. When the camera of the mobile phone and the vehicle-mounted camera are in the first relative position, the mobile phone obtains a third vanishing point position of the current field of view of its own camera and a fourth vanishing point position of the current field of view of the vehicle-mounted camera, and determines, according to a preset fourth rule, the first rotation matrix in the first relative positional relationship corresponding to the third vanishing point position, the fourth vanishing point position and the initial rotation matrix;
in this embodiment, steps 501 to 503 are similar to steps 401 to 403 described in fig. 4, and are not repeated here.
504. The mobile phone calls an OpenCV program to determine an initial translation vector and an initial rotation matrix in an initial relative position relation between a camera of the mobile phone and the vehicle-mounted camera;
In this embodiment, when the positions of the camera of the mobile phone and the vehicle-mounted camera are the preset initial calibration positions, the mobile phone may call the OpenCV program to determine an initial translation vector and an initial rotation matrix in the current initial relative positional relationship between the camera of the mobile phone and the vehicle-mounted camera.
When determining the initial translation vector and the initial rotation matrix of the camera of the mobile phone and the vehicle-mounted camera, the coordinate system of the camera of the mobile phone may be selected as a reference coordinate system, or the coordinate system of the vehicle-mounted camera may be selected as a reference coordinate system, which is not limited herein.
In this embodiment there is no fixed execution order among step 502, step 503 and step 504: any of them may be executed first, or they may be executed simultaneously as circumstances allow, which is not limited here.
Step 505 in this embodiment is similar to step 405 described in fig. 4, and is not repeated here.
506. The mobile phone obtains a third vanishing point position of the current field of view of its camera, obtaining a third X value and a third Y value in the coordinate system of the camera of the mobile phone;
In this embodiment, the mobile phone may obtain the third vanishing point position of the current field of view of its camera according to the parallel perspective principle, using the coordinate system of its own camera as the reference coordinate system and following a standard method, thereby obtaining the third X value and the third Y value of the third vanishing point position in that coordinate system; this is not described in detail here.
507. The mobile phone obtains a fourth vanishing point position of the current field of view of the vehicle-mounted camera, obtaining a fourth X value and a fourth Y value in the coordinate system of the camera of the mobile phone;
In this embodiment, the mobile phone may obtain the fourth vanishing point position of the current field of view of the vehicle-mounted camera according to the parallel perspective principle, using the coordinate system of its own camera as the reference coordinate system, thereby obtaining the fourth X value and the fourth Y value of the fourth vanishing point position in that coordinate system, which is not described in detail here.
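A vanishing point of a road scene can be estimated, for instance, as the least-squares intersection of detected lane-line segments; the sketch below is one hypothetical realization of the parallel perspective principle, not the patent's prescribed procedure:

```python
import cv2
import numpy as np

def estimate_vanishing_point(edges):
    """Least-squares intersection of line segments found in a Canny
    edge image; returns the vanishing point in pixel coordinates."""
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=60, maxLineGap=10)
    A, b = [], []
    for x1, y1, x2, y2 in segs[:, 0]:
        # Line through (x1, y1) and (x2, y2):
        # (y2 - y1) * x - (x2 - x1) * y = x1 * y2 - x2 * y1
        A.append([y2 - y1, -(x2 - x1)])
        b.append(x1 * y2 - x2 * y1)
    (vx, vy), *_ = np.linalg.lstsq(np.asarray(A, float),
                                   np.asarray(b, float), rcond=None)
    return vx, vy
```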
In this embodiment there is no fixed execution order between step 506 and step 507: either may be executed first, or they may be executed simultaneously as circumstances allow, which is not limited here.
508. The mobile phone substitutes the fourth X value, the third X value, the resolution of the mobile phone and the second view angle of the camera of the mobile phone in the current X direction into the formula R.a = (Phone_VP.x − Car_VP.x) / res_x × FOV_x to obtain the plane rotation angle around the X axis;
In this embodiment, the mobile phone can calculate the plane rotation angle around the X axis by the formula R.a = (Phone_VP.x − Car_VP.x) / res_x × FOV_x, where R.a is the plane rotation angle around the X axis, Phone_VP.x is the third X value, Car_VP.x is the fourth X value, res_x is the number of pixels of the mobile phone along the X axis, and FOV_x is the second view angle of the camera of the mobile phone in the current X direction.
509. The mobile phone substitutes the fourth Y value, the third Y value, the resolution of the mobile phone and the second view angle of the camera of the mobile phone in the current Y direction into the formula R.b = (Phone_VP.y − Car_VP.y) / res_y × FOV_y to obtain the plane rotation angle around the Y axis;
In this embodiment, the mobile phone can calculate the plane rotation angle around the Y axis by the formula R.b = (Phone_VP.y − Car_VP.y) / res_y × FOV_y, where R.b is the plane rotation angle around the Y axis, Phone_VP.y is the third Y value, Car_VP.y is the fourth Y value, res_y is the number of pixels of the mobile phone along the Y axis, and FOV_y is the second view angle of the camera of the mobile phone in the current Y direction.
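Expressed directly in code, the two formulas are a small-angle mapping from the pixel offset between the two vanishing points to an angle through the field of view; names mirror the patent's symbols, the numeric values are hypothetical, and FOVs are in radians:

```python
import math

def plane_rotation_angles(phone_vp, car_vp, res_x, res_y, fov_x, fov_y):
    """Steps 508-509: R.a = (Phone_VP.x - Car_VP.x) / res_x * FOV_x and
    R.b = (Phone_VP.y - Car_VP.y) / res_y * FOV_y."""
    r_a = (phone_vp[0] - car_vp[0]) / res_x * fov_x  # around X axis (patent's convention)
    r_b = (phone_vp[1] - car_vp[1]) / res_y * fov_y  # around Y axis
    return r_a, r_b

# Hypothetical values: 1920x1080 resolution, ~70/43 degree fields of view.
r_a, r_b = plane_rotation_angles((980, 520), (940, 560), 1920, 1080,
                                 math.radians(70), math.radians(43))
```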
In this embodiment there is no fixed execution order between step 508 and step 509: either may be executed first, or they may be executed simultaneously as circumstances allow, which is not limited here.
510. The mobile phone calculates a first rotation matrix in a first relative position relation between a camera of the mobile phone and the vehicle-mounted camera according to the plane rotation angle around the X axis, the plane rotation angle around the Y axis and the plane rotation angle around the Z axis in the initial rotation matrix;
When the mobile phone is mounted on the stand of the target vehicle, the relative position of the mobile phone and the vehicle-mounted camera may deflect angularly from the initial calibration position described in step 504 during use. This deflection is generally deflection in the X-axis direction and deflection in the Y-axis direction; deflection in the Z-axis direction essentially does not occur, so the plane rotation angle around the Z axis can be taken from the initial rotation matrix.
In this embodiment, steps 511 to 515 are similar to steps 413 to 417 described in fig. 4, and are not repeated here.
In this embodiment, steps 505 to 510 are the process by which the mobile phone determines the first translation vector and the first rotation matrix, and steps 511 to 514 are the process of determining the first matching parallax. There is no fixed execution order between the two processes: either may be performed first, or they may be performed simultaneously as circumstances allow, which is not limited here.
In this embodiment, when the terminal camera and the vehicle-mounted camera are in the first relative position, that is, a position in which they do not coincide, the terminal obtains the current first relative positional relationship between the two cameras according to a preset first rule. The terminal acquires a first image of the target object captured by the terminal camera and a second image captured by the vehicle-mounted camera at the same moment, and determines the target depth information of the target object corresponding to the first image, the second image and the first relative positional relationship according to a preset second rule. The terminal thus determines the target depth information of the target object using a binocular vision method.
3. When the camera of the mobile phone and the vehicle-mounted camera are in the first relative position, the mobile phone obtains the current first reading of its own accelerometer and the current second reading of the accelerometer of the target vehicle, obtains a first vanishing point position of the current field of view of its own camera and a second vanishing point position of the current field of view of the vehicle-mounted camera, and determines, according to a preset third rule, the first rotation matrix in the first relative positional relationship corresponding to the first reading, the second reading, the first vanishing point position and the second vanishing point position;
In this embodiment, steps 601 to 605 are similar to steps 401 to 405 described in fig. 4, and detailed descriptions thereof are omitted.
606. The mobile phone reads the current first reading of its accelerometer and calculates a first angle value in the X direction and a first angle value in the Z direction;
607. The mobile phone obtains the current second reading of the accelerometer of the target vehicle and calculates a second angle value in the X direction and a second angle value in the Z direction;
608. The mobile phone subtracts the first angle value in the X direction from the second angle value in the X direction to obtain the plane rotation angle around the X axis, and subtracts the first angle value in the Z direction from the second angle value in the Z direction to obtain the plane rotation angle around the Z axis;
In this embodiment, steps 606 to 608 are similar to steps 406 to 408 described in fig. 4, and detailed descriptions thereof are omitted herein.
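Purely as an illustration of the omitted detail, the angle computation in steps 606 to 608 can be sketched as follows. The sketch assumes the accelerometer reports the gravity vector along the device axes while at rest; the axis conventions, the helper name tilt_angles_deg and the numeric readings are assumptions for illustration, not values from the patent.

```python
import math

def tilt_angles_deg(ax, ay, az):
    # Tilt about the X axis and about the Z axis, recovered from the
    # gravity vector measured by a 3-axis accelerometer at rest.
    angle_x = math.degrees(math.atan2(ay, az))
    angle_z = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    return angle_x, angle_z

# Steps 606-608: subtract the phone's angles (first reading) from the
# vehicle's angles (second reading) to get the plane rotation angles.
phone_x, phone_z = tilt_angles_deg(0.12, 0.95, 9.76)   # hypothetical m/s^2
car_x, car_z = tilt_angles_deg(0.05, 0.40, 9.80)       # hypothetical m/s^2
rotation_about_x = car_x - phone_x
rotation_about_z = car_z - phone_z
```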
609. The mobile phone obtains a first vanishing point position in the current field of view of its camera, yielding a first X value and a first Y value in the coordinate system of the camera of the mobile phone;
610. The mobile phone obtains a second vanishing point position in the current field of view of the vehicle-mounted camera, yielding a second X value and a second Y value in the coordinate system of the camera of the mobile phone;
611. The mobile phone substitutes the second X value, the first X value, the resolution of the mobile phone and the first view angle of its camera in the current X direction into the formula R.a = (Phone_VP.x - Car_VP.x) / res_x * FOV_x to obtain the plane rotation angle around the X axis;
612. The mobile phone substitutes the second Y value, the first Y value, the resolution of the mobile phone and the first view angle of its camera in the current Y direction into the formula R.b = (Phone_VP.y - Car_VP.y) / res_y * FOV_y to obtain the plane rotation angle around the Y axis;
In this embodiment, steps 609 to 612 are similar to steps 506 to 509 described in fig. 5, and detailed descriptions thereof are omitted herein.
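For illustration, the formulas of steps 611 and 612 are a linear mapping from the vanishing-point pixel offset to a rotation angle, scaled by the field of view per pixel. A minimal sketch, with hypothetical resolution and field-of-view values:

```python
def vp_rotation_angles(phone_vp, car_vp, res_x, res_y, fov_x, fov_y):
    # R.a = (Phone_VP.x - Car_VP.x) / res_x * FOV_x -> angle around the X axis
    # R.b = (Phone_VP.y - Car_VP.y) / res_y * FOV_y -> angle around the Y axis
    # (the axis naming follows the patent's convention)
    r_a = (phone_vp[0] - car_vp[0]) / res_x * fov_x
    r_b = (phone_vp[1] - car_vp[1]) / res_y * fov_y
    return r_a, r_b

# Hypothetical 1920x1080 image with a 70-by-43-degree field of view.
r_a, r_b = vp_rotation_angles((980.0, 520.0), (960.0, 540.0),
                              1920, 1080, 70.0, 43.0)
```

The mapping scales a vanishing-point shift of one full image width to one full field-of-view angle, a small-angle approximation.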
In this embodiment, steps 606 to 608 are the process by which the mobile phone determines the plane rotation angle around the X axis and the plane rotation angle around the Z axis, and steps 609 to 612 are the process by which it determines the plane rotation angle around the X axis and the plane rotation angle around the Y axis. There is no fixed execution order between these two processes: either may be performed first, or they may be performed simultaneously as circumstances allow, which is not limited herein.
In this embodiment, the two processes are redundant in that both determine the plane rotation angle around the X axis. The redundancy may be removed in either direction: steps 606 to 608 may determine the plane rotation angles around the X axis and the Z axis while steps 609 to 612 determine only the plane rotation angle around the Y axis, or steps 606 to 608 may determine only the plane rotation angle around the Z axis while steps 609 to 612 determine the plane rotation angles around the X axis and the Y axis, which is not limited herein.
It should be noted that a vanishing point is the visual intersection of parallel lines. For example, when looking along a railway at its two rails, or along a road at the trees lined up on both sides, the two parallel rails or the two rows of trees appear to meet at a point in the distance; in a perspective view this point is called the vanishing point.
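The patent does not prescribe how the vanishing point is located in the image. Purely as an illustration, one common approach with an open source computer vision library such as OpenCV is to detect straight edges and take a robust intersection of the detected lines; everything in this sketch is an assumption, not the patent's method.

```python
import cv2
import numpy as np

def estimate_vanishing_point(gray):
    # Detect straight lines (e.g. lane markings or rails) and intersect
    # every pair; the median of the intersections approximates the
    # vanishing point of the dominant parallel structure.
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)
    if lines is None or len(lines) < 2:
        return None
    points = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (r1, t1), (r2, t2) = lines[i][0], lines[j][0]
            a = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(a)) < 1e-6:  # near-parallel in the image
                continue
            points.append(np.linalg.solve(a, np.array([r1, r2])))
    if not points:
        return None
    return np.median(np.array(points), axis=0)  # (x, y) in pixels
```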
613. The mobile phone calculates a first rotation matrix in a first relative position relation between a camera of the mobile phone and the vehicle-mounted camera according to the plane rotation angle around the X axis, the plane rotation angle around the Y axis and the plane rotation angle around the Z axis;
In this embodiment, the mobile phone may calculate the first rotation matrix in the first relative position relation between its camera and the vehicle-mounted camera from the plane rotation angle around the X axis, the plane rotation angle around the Y axis and the plane rotation angle around the Z axis according to a standard method, which is not described in detail herein.
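One such standard method composes the three per-axis rotations into a single 3x3 matrix. The sketch below uses the order R = Rz · Ry · Rx; the composition order is a convention and must match how the angles were defined, which the patent leaves to the implementation.

```python
import numpy as np

def rotation_matrix(rx_deg, ry_deg, rz_deg):
    # Compose per-axis plane rotation angles (degrees) into one rotation
    # matrix, R = Rz @ Ry @ Rx.
    rx, ry, rz = np.radians([rx_deg, ry_deg, rz_deg])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx
```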
In this embodiment, steps 614 to 618 are similar to steps 413 to 417 described in fig. 4, and are not repeated here.
In this embodiment, steps 605 to 613 are the process by which the mobile phone determines the first translation vector and the first rotation matrix, and steps 614 to 617 are the process of determining the first matching parallax. There is no fixed execution order between these two processes: either may be performed first, or they may be performed simultaneously as circumstances allow, which is not limited herein.
In this embodiment, when the terminal camera and the vehicle-mounted camera are at a first relative position, that is, a position at which the two cameras do not coincide, the terminal obtains their current first relative position relation according to a preset first rule. The terminal then acquires a first image of a target object captured by the terminal camera and a second image of the same object captured by the vehicle-mounted camera at the same moment, and determines the target depth information of the target object corresponding to the first image, the second image and the first relative position relation according to a preset second rule. In this way the terminal determines the target depth information of the target object by a binocular vision method.
The control method of the binocular vision application in the embodiment of the present application is described above, and the terminal in the embodiment of the present application is described below, referring to fig. 7, where an embodiment of the terminal in the embodiment of the present application includes:
a connection unit 701, configured to establish connection with a vehicle-mounted camera of a target vehicle;
The first obtaining unit 702 is configured to obtain, according to a preset first rule, a current first relative position relationship between the terminal camera and the vehicle-mounted camera when the positions of the terminal camera and the vehicle-mounted camera of the terminal are first relative positions, where the first relative positions are positions when the terminal camera and the vehicle-mounted camera do not coincide;
a second obtaining unit 703, configured to obtain a first image of the target object and a second image of the target object, where the first image is an image collected by the terminal camera, the second image is an image collected by the vehicle-mounted camera, and the first image and the second image are collected at the same time;
The first determining unit 704 is configured to determine target depth information of a target object corresponding to the first image, the second image, and the first relative positional relationship according to a preset second rule.
In this embodiment, the first obtaining unit 702 may determine the first rotation matrix in various manners, which may specifically be:
The first obtaining unit 702 is specifically configured to determine that the initial translation vector is a first translation vector; acquiring a current first reading of an accelerometer of the terminal; obtaining a current second reading of an accelerometer of the target vehicle; acquiring a first vanishing point position of a current field of view of a terminal camera; acquiring a second vanishing point position of the current field of view of the vehicle-mounted camera; and determining a first rotation matrix corresponding to the first reading, the second reading, the first vanishing point position and the second vanishing point position according to a preset third rule.
Or alternatively
The first obtaining unit 702 is specifically configured to determine that the initial translation vector is a first translation vector; acquiring a current first accelerometer reading of an accelerometer of the terminal; acquiring a current first gyroscope reading of a gyroscope of the terminal; obtaining a current second accelerometer reading of an accelerometer of the target vehicle; obtaining a current second gyroscope reading of a gyroscope of the target vehicle; a first rotation matrix is determined from the first accelerometer reading, the first gyroscope reading, the second accelerometer reading, and the second gyroscope reading.
Or alternatively
The first obtaining unit 702 is specifically configured to determine that the initial translation vector is a first translation vector; acquiring a third vanishing point position of the current field of view of the terminal camera; acquiring a fourth vanishing point position of the current field of view of the vehicle-mounted camera; and determining a first rotation matrix corresponding to the third vanishing point position and the fourth vanishing point position and the initial rotation matrix according to a preset fourth rule.
In this embodiment, the second acquiring unit 703 is specifically configured to perform image preprocessing on the first image and the second image; perform feature extraction on the preprocessed first image and the preprocessed second image; perform feature stereo matching on the preprocessed first image and the preprocessed second image to obtain a first matching parallax; and determine the distance from the first matching parallax, the first internal parameter matrix, the first distortion coefficient matrix, the second internal parameter matrix, the second distortion coefficient matrix, the first rotation matrix and the first translation vector.
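As an illustration of this rectify-match-triangulate pipeline, the following minimal sketch uses OpenCV. It assumes same-sized grayscale input frames and the matrices named above; the SGBM matcher settings are placeholder values, not parameters prescribed by the patent.

```python
import cv2
import numpy as np

def depth_from_pair(img1, img2, K1, D1, K2, D2, R, T):
    # Rectify the terminal/vehicle camera pair into a common stereo
    # geometry, compute disparity, then reproject disparity to metric depth.
    size = (img1.shape[1], img1.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map1 = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map2 = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    rect1 = cv2.remap(img1, map1[0], map1[1], cv2.INTER_LINEAR)
    rect2 = cv2.remap(img2, map2[0], map2[1], cv2.INTER_LINEAR)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    disparity = matcher.compute(rect1, rect2).astype(np.float32) / 16.0
    points3d = cv2.reprojectImageTo3D(disparity, Q)
    return points3d[..., 2]  # Z channel: distance along the optical axis
```

With rectified images, depth reduces to Z = f·B/d (focal length times baseline over disparity), which reprojectImageTo3D evaluates through the Q matrix.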
In this embodiment, the flow executed by each unit in the terminal is similar to the flow executed by the terminal described in the embodiments shown in fig. 4 to 6, and will not be repeated here.
In this embodiment, the connection unit 701 establishes a connection with the vehicle-mounted camera of the target vehicle. When the terminal camera and the vehicle-mounted camera are at the first relative position, that is, a position at which the two cameras do not coincide, the first obtaining unit 702 obtains their current first relative position relation according to the preset first rule. The second obtaining unit 703 acquires a first image of the target object captured by the terminal camera and a second image of the same object captured by the vehicle-mounted camera at the same moment. The first determining unit 704 then determines the target depth information of the target object corresponding to the first image, the second image and the first relative position relation according to the preset second rule, thereby determining the depth of the target object by a binocular vision method.
In this embodiment, the terminal further includes:
The second determining unit 705 is configured to determine, according to a target algorithm, a first internal parameter matrix and a first distortion coefficient matrix of the terminal camera when positions of the terminal camera and the vehicle-mounted camera are preset initial calibration positions;
A third determining unit 706, configured to determine a second internal parameter matrix and a second distortion coefficient matrix of the vehicle camera according to a target algorithm;
and a fourth determining unit 707, configured to determine an initial translation vector in an initial relative positional relationship between the terminal camera and the vehicle-mounted camera according to a target algorithm.
In this embodiment, the terminal further includes:
and a fifth determining unit 708, configured to determine an initial rotation matrix in an initial relative positional relationship between the terminal camera and the vehicle-mounted camera according to a target algorithm.
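Claim 2 below identifies the target algorithm with an open source computer vision function library, so the initial calibration carried out by units 705 to 708 could, for example, be realized with OpenCV's stereo calibration. The function name, argument names and flag choice in this sketch are assumptions for illustration.

```python
import cv2

def initial_extrinsics(objpoints, imgpoints_terminal, imgpoints_vehicle,
                       K1, D1, K2, D2, image_size):
    # Joint calibration of the terminal camera / vehicle-mounted camera pair
    # from synchronized views of a known pattern (corner detections gathered
    # beforehand, e.g. with cv2.findChessboardCorners). Fixing the intrinsics
    # leaves only the extrinsics to estimate: the initial rotation matrix R0
    # and the initial translation vector T0 between the two cameras.
    _, _, _, _, _, R0, T0, _, _ = cv2.stereoCalibrate(
        objpoints, imgpoints_terminal, imgpoints_vehicle,
        K1, D1, K2, D2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R0, T0
```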
The embodiment of the present application provides a terminal, as shown in fig. 8. For convenience of explanation, only the parts relevant to the embodiment of the present application are shown; for specific technical details not disclosed, please refer to the method part of the embodiment of the present application. The terminal may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a vehicle-mounted computer, and the like; the mobile phone is taken as an example:
Fig. 8 is a block diagram showing part of the structure of a mobile phone related to the terminal provided by an embodiment of the present application. Referring to fig. 8, the mobile phone includes: radio frequency (RF) circuitry 810, a memory 820, an input unit 830, a display unit 840, a sensor 850, audio circuitry 860, a wireless fidelity (WiFi) module 870, a processor 880, a power supply 890, and the like. Those skilled in the art will appreciate that the handset structure shown in fig. 8 does not limit the handset, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes the components of the mobile phone in detail with reference to fig. 8:
The RF circuit 810 may be used for receiving and transmitting signals during messaging or a call; in particular, downlink information from a base station is received and handed to the processor 880 for processing, and uplink data is sent to the base station. Typically, the RF circuitry 810 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 810 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 820 may be used to store software programs and modules, and the processor 880 performs the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 820. The memory 820 may mainly include a program storage area, which may store an operating system and the application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a data storage area, which may store data created according to the use of the handset (such as audio data, a phonebook, etc.). In addition, the memory 820 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 830 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset. In particular, the input unit 830 may include a touch panel 831 and other input devices 832. The touch panel 831, also referred to as a touch screen, may collect touch operations on or near it (e.g., operations performed by the user on or near the touch panel 831 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 831 may include a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 880, and can also receive and execute commands from the processor 880. The touch panel 831 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type, among others. In addition to the touch panel 831, the input unit 830 may include other input devices 832, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, a joystick, and the like.
The display unit 840 may be used to display information input by the user or information provided to the user, as well as the various menus of the mobile phone. The display unit 840 may include a display panel 841, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 831 may overlay the display panel 841; when the touch panel 831 detects a touch operation on or near it, the operation is passed to the processor 880 to determine the type of touch event, and the processor 880 then provides a corresponding visual output on the display panel 841 according to that type. Although in fig. 8 the touch panel 831 and the display panel 841 are shown as two separate components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 831 and the display panel 841 may be integrated to implement those functions.
The handset may also include at least one sensor 850, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which can adjust the brightness of the display panel 841 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 841 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used for applications that recognize the attitude of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors that may also be configured on the handset, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described in detail herein.
The audio circuit 860, speaker 861, and microphone 862 may provide an audio interface between the user and the handset. The audio circuit 860 may transmit the electrical signal converted from received audio data to the speaker 861, which converts it into a sound signal for output; conversely, the microphone 862 converts collected sound signals into electrical signals, which the audio circuit 860 receives and converts into audio data. After being processed by the processor 880, the audio data may be transmitted via the RF circuit 810 to, for example, another mobile phone, or output to the memory 820 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 870, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing wireless broadband Internet access. Although fig. 8 shows the WiFi module 870, it is not an essential part of the handset and may be omitted as needed without changing the essence of the invention.
The processor 880 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile phone and processes data by running or executing software programs and/or modules stored in the memory 820 and calling data stored in the memory 820, thereby performing overall monitoring of the mobile phone. In the alternative, processor 880 may include one or more processing units; preferably, the processor 880 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 880.
The handset further includes a power supply 890 (e.g., a battery) for powering the various components, which may be logically connected to the processor 880 through a power management system, as well as performing functions such as managing charge, discharge, and power consumption by the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which will not be described herein.
The embodiment of the application also provides a computer storage medium for storing computer software instructions for the terminal described above, the instructions comprising a program designed to execute the processes performed by the terminal.
Embodiments of the present application also provide a computer program product comprising computer software instructions loadable by a processor to implement the method flows of the embodiments shown in fig. 4 to 6.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (5)

1. A method of controlling a binocular vision application, comprising:
the terminal is connected with a vehicle-mounted camera of a target vehicle;
when the positions of the terminal camera and the vehicle-mounted camera of the terminal are first relative positions, the terminal obtains the current first relative position relation between the terminal camera and the vehicle-mounted camera according to a preset first rule, and the first relative position is a position when the terminal camera and the vehicle-mounted camera are not overlapped;
The terminal acquires a first image of a target object and a second image of the target object, wherein the first image is an image acquired by a camera of the terminal, the second image is an image acquired by the vehicle-mounted camera, and the first image and the second image are acquired at the same moment;
The terminal determines target depth information of the target object corresponding to the first image, the second image and the first relative position relation according to a preset second rule;
Before the terminal obtains the current first relative position relation between the terminal camera and the vehicle-mounted camera according to a preset first rule, the method further comprises:
When the positions of the terminal camera and the vehicle-mounted camera are preset initial calibration positions, the terminal determines a first internal parameter matrix and a first distortion coefficient matrix of the terminal camera according to a target algorithm;
The terminal determines a second internal parameter matrix and a second distortion coefficient matrix of the vehicle-mounted camera according to the target algorithm;
The terminal determines an initial translation vector in the initial relative position relation of the terminal camera and the vehicle-mounted camera according to the target algorithm based on the coordinate system of the terminal camera as a reference coordinate system;
The first relative positional relationship includes a first rotation matrix and a first translation vector; the terminal obtaining the current first relative position relation between the terminal camera and the vehicle-mounted camera according to a preset first rule comprises the following steps: the terminal determines the initial translation vector as the first translation vector; the terminal acquires a current first reading of an accelerometer of the terminal, and calculates a first angle value in the X direction and a first angle value in the Z direction; the terminal acquires a current second reading of an accelerometer of the target vehicle, and calculates a second angle value in the X direction and a second angle value in the Z direction; the terminal subtracts the first angle value in the X direction from the second angle value in the X direction to obtain a plane rotation angle around the X axis, and subtracts the first angle value in the Z direction from the second angle value in the Z direction to obtain a plane rotation angle around the Z axis; the terminal obtains a first vanishing point position of a current field of view of the terminal camera to obtain a first X value and a first Y value based on a coordinate system of the terminal camera; the terminal obtains a second vanishing point position of the current field of view of the vehicle-mounted camera to obtain a second X value and a second Y value based on a coordinate system of the terminal camera; the terminal obtains a plane rotation angle around an X axis according to the first X value, the second X value, the resolution of the terminal and a first view angle in the current X direction of the terminal camera; the terminal obtains a plane rotation angle around a Y axis according to the first Y value, the second Y value, the resolution of the terminal and a first view angle in the current Y direction of the terminal camera; the terminal calculates a first rotation matrix in a first relative position relation between the terminal camera and the vehicle-mounted camera according to the plane rotation angle around the X axis, the plane rotation angle around the Y axis and the plane rotation angle around the Z axis;
Or alternatively,
The first relative positional relationship includes a first rotation matrix and a first translation vector; the terminal obtaining the current first relative position relation between the terminal camera and the vehicle-mounted camera according to a preset first rule comprises the following steps: the terminal determines the initial translation vector as the first translation vector; the method comprises the steps that the terminal obtains current first accelerometer readings of an accelerometer of the terminal, and a first angle value in the X direction and a first angle value in the Z direction are obtained through calculation; the terminal obtains the current second accelerometer reading of the accelerometer of the target vehicle, and calculates a second angle value in the X direction and a second angle value in the Z direction; the terminal subtracts the first angle value in the X direction from the second angle value in the X direction to obtain a plane rotation angle around the X axis, and subtracts the first angle value in the Z direction from the second angle value in the Z direction to obtain a plane rotation angle around the Z axis; the terminal obtains the current first gyroscope reading of the gyroscope of the terminal, and calculates to obtain a first angle value in the Y direction; the terminal obtains the current second gyroscope reading of the gyroscope of the target vehicle, and calculates a second angle value in the Y direction; the terminal subtracts the first angle value in the Y direction from the second angle value in the Y direction to obtain a plane rotation angle around the Y axis; the terminal calculates a first rotation matrix in a first relative position relation between the terminal camera and the vehicle-mounted camera according to the plane rotation angle around the X axis, the plane rotation angle around the Y axis and the plane rotation angle around the Z axis;
Or alternatively,
The first relative position relationship comprises a first rotation matrix and a first translation vector, and before the terminal obtains the first relative position relationship between the terminal camera and the vehicle-mounted camera according to a preset first rule, the method further comprises: the terminal determines an initial rotation matrix in the initial relative position relation between the terminal camera and the vehicle-mounted camera according to the target algorithm; the terminal obtaining the current first relative position relation between the terminal camera and the vehicle-mounted camera according to a preset first rule comprises the following steps: the terminal determines the initial translation vector as the first translation vector; the terminal obtains a third vanishing point position of the current field of view of the terminal camera, and a third X value and a third Y value based on a coordinate system of the terminal camera are obtained; the terminal obtains a fourth vanishing point position of the current field of view of the vehicle-mounted camera, and a fourth X value and a fourth Y value on a coordinate system based on the terminal camera are obtained; the terminal obtains a plane rotation angle around an X axis according to the third X value, the fourth X value, the resolution of the terminal and a second view angle in the current X direction of the terminal camera; the terminal obtains a plane rotation angle around a Y axis according to the third Y value, the fourth Y value, the resolution of the terminal and a second view angle in the current Y direction of the terminal camera; and the terminal calculates a first rotation matrix in a first relative position relation between the terminal camera and the vehicle-mounted camera according to the plane rotation angle around the X axis, the plane rotation angle around the Y axis and the plane rotation angle around the Z axis in the initial rotation matrix.
2. The method of claim 1, wherein the objective algorithm comprises an open source computer vision function library.
3. The method of claim 1, wherein the target depth information comprises a distance of the target object from the target vehicle;
The terminal determining the target depth information of the target object corresponding to the first image, the second image and the first relative position relation according to a preset second rule comprises:
the terminal performs image preprocessing on the first image and the second image;
the terminal performs feature extraction on the first image after image pretreatment and the second image after image pretreatment;
The terminal performs characteristic stereo matching on the first image after image preprocessing and the second image after image preprocessing to obtain first matching parallax;
The terminal determines the distance according to the first matching parallax, the first internal parameter matrix, the first distortion coefficient matrix, the second internal parameter matrix, the second distortion coefficient matrix, the first rotation matrix and the first translation vector.
4. A terminal, comprising:
The connection unit is used for establishing connection with the vehicle-mounted camera of the target vehicle;
The first obtaining unit is used for obtaining the current first relative position relation between the terminal camera and the vehicle-mounted camera according to a preset first rule when the positions of the terminal camera and the vehicle-mounted camera are first relative positions, wherein the first relative positions are positions when the terminal camera and the vehicle-mounted camera are not overlapped;
The second acquisition unit is used for acquiring a first image of a target object and a second image of the target object, wherein the first image is an image acquired by the terminal camera, the second image is an image acquired by the vehicle-mounted camera, and the first image and the second image are acquired at the same moment;
A first determining unit, configured to determine target depth information of the target object corresponding to the first image, the second image, and the first relative positional relationship according to a preset second rule;
the second determining unit is used for determining a first internal parameter matrix and a first distortion coefficient matrix of the terminal camera according to a target algorithm when the positions of the terminal camera and the vehicle-mounted camera are preset initial calibration positions;
The third determining unit is used for determining a second internal parameter matrix and a second distortion coefficient matrix of the vehicle-mounted camera according to the target algorithm;
the fourth determining unit is used for determining an initial translation vector in the initial relative position relation between the terminal camera and the vehicle-mounted camera according to the target algorithm;
The first obtaining unit is specifically configured to obtain a first relative positional relationship including a first rotation matrix and a first translation vector; the terminal determines the initial translation vector as the first translation vector; the terminal acquires a current first reading of an accelerometer of the terminal, and calculates a first angle value in the X direction and a first angle value in the Z direction; the terminal acquires a current second reading of an accelerometer of the target vehicle, and calculates a second angle value in the X direction and a second angle value in the Z direction; the terminal subtracts the first angle value in the X direction from the second angle value in the X direction to obtain a plane rotation angle around the X axis, and subtracts the first angle value in the Z direction from the second angle value in the Z direction to obtain a plane rotation angle around the Z axis; the terminal obtains a first vanishing point position of a current field of view of the terminal camera to obtain a first X value and a first Y value based on a coordinate system of the terminal camera; the terminal obtains a second vanishing point position of the current field of view of the vehicle-mounted camera to obtain a second X value and a second Y value based on a coordinate system of the terminal camera; the terminal obtains a plane rotation angle around an X axis according to the first X value, the second X value, the resolution of the terminal and a first view angle in the current X direction of the terminal camera; the terminal obtains a plane rotation angle around a Y axis according to the first Y value, the second Y value, the resolution of the terminal and a first view angle in the current Y direction of the terminal camera; the terminal calculates a first rotation matrix in a first relative position relation between the terminal camera and the vehicle-mounted camera according to the plane rotation angle around the X axis, the plane rotation angle around the Y axis and the plane rotation angle around the Z axis;
Or alternatively,
The first obtaining unit is specifically configured to obtain a first relative positional relationship including a first rotation matrix and a first translation vector; the terminal determines the initial translation vector as the first translation vector; the method comprises the steps that the terminal obtains current first accelerometer readings of an accelerometer of the terminal, and a first angle value in the X direction and a first angle value in the Z direction are obtained through calculation; the terminal obtains the current second accelerometer reading of the accelerometer of the target vehicle, and calculates a second angle value in the X direction and a second angle value in the Z direction; the terminal subtracts the first angle value in the X direction from the second angle value in the X direction to obtain a plane rotation angle around the X axis, and subtracts the first angle value in the Z direction from the second angle value in the Z direction to obtain a plane rotation angle around the Z axis; the terminal obtains the current first gyroscope reading of the gyroscope of the terminal, and calculates to obtain a first angle value in the Y direction; the terminal obtains the current second gyroscope reading of the gyroscope of the target vehicle, and calculates a second angle value in the Y direction; the terminal subtracts the first angle value in the Y direction from the second angle value in the Y direction to obtain a plane rotation angle around the Y axis; the terminal calculates a first rotation matrix in a first relative position relation between the terminal camera and the vehicle-mounted camera according to the plane rotation angle around the X axis, the plane rotation angle around the Y axis and the plane rotation angle around the Z axis;
Or alternatively,
The first obtaining unit is specifically configured to, before the first relative positional relationship between the terminal camera and the vehicle-mounted camera is obtained by the terminal according to a preset first rule, obtain a first rotation matrix and a first translation vector, where the method further includes: the terminal determines an initial rotation matrix in the initial relative position relation between the terminal camera and the vehicle-mounted camera according to the target algorithm; the terminal determines the initial translation vector as the first translation vector; the terminal obtains a third vanishing point position of the current field of view of the terminal camera, and a third X value and a third Y value based on a coordinate system of the terminal camera are obtained; the terminal obtains a fourth vanishing point position of the current field of view of the vehicle-mounted camera, and a fourth X value and a fourth Y value on a coordinate system based on the terminal camera are obtained; the terminal obtains a plane rotation angle around an X axis according to the third X value, the fourth X value, the resolution of the terminal and a second view angle in the current X direction of the terminal camera; the terminal obtains a plane rotation angle around a Y axis according to the third Y value, the fourth Y value, the resolution of the terminal and a second view angle in the current Y direction of the terminal camera; and the terminal calculates a first rotation matrix in a first relative position relation between the terminal camera and the vehicle-mounted camera according to the plane rotation angle around the X axis, the plane rotation angle around the Y axis and the plane rotation angle around the Z axis in the initial rotation matrix.
5. A terminal comprising a memory and a processor connected to each other, wherein the memory stores a computer program for implementing the control method of the binocular vision application of any one of claims 1-3 when executed by the processor.
CN201911197375.0A 2019-11-27 2019-11-27 Control method and terminal for binocular vision application Active CN112866629B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911197375.0A CN112866629B (en) 2019-11-27 2019-11-27 Control method and terminal for binocular vision application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911197375.0A CN112866629B (en) 2019-11-27 2019-11-27 Control method and terminal for binocular vision application

Publications (2)

Publication Number Publication Date
CN112866629A CN112866629A (en) 2021-05-28
CN112866629B true CN112866629B (en) 2024-06-21

Family

ID=75996008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911197375.0A Active CN112866629B (en) 2019-11-27 2019-11-27 Control method and terminal for binocular vision application

Country Status (1)

Country Link
CN (1) CN112866629B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188665A (en) * 2019-05-28 2019-08-30 北京百度网讯科技有限公司 Image processing method, device and computer equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160663A (en) * 2015-08-24 2015-12-16 深圳奥比中光科技有限公司 Method and system for acquiring depth image
CN106529495B (en) * 2016-11-24 2020-02-07 腾讯科技(深圳)有限公司 Obstacle detection method and device for aircraft
CN107688174A (en) * 2017-08-02 2018-02-13 北京纵目安驰智能科技有限公司 A kind of image distance-finding method, system, storage medium and vehicle-mounted visually-perceptible equipment
CN109345593B (en) * 2018-09-04 2022-04-26 海信集团有限公司 Camera posture detection method and device
CN109615652B (en) * 2018-10-23 2020-10-27 西安交通大学 Depth information acquisition method and device
CN109708655A (en) * 2018-12-29 2019-05-03 百度在线网络技术(北京)有限公司 Air navigation aid, device, vehicle and computer readable storage medium


Also Published As

Publication number Publication date
CN112866629A (en) 2021-05-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 233000 building 4, national financial incubation Industrial Park, 17 Yannan Road, high tech Zone, Bengbu City, Anhui Province

Applicant after: Dafu Technology (Anhui) Co.,Ltd.

Address before: 518000 the first, second and third floors of 101 and A4 in the third industrial zone A1, A2 and A3 of Shajing Industrial Company, Ho Xiang Road, Shajing street, Bao'an District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN TATFOOK TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant