CN110188665B - Image processing method and device and computer equipment - Google Patents


Info

Publication number
CN110188665B
CN110188665B
Authority
CN
China
Prior art keywords
image
vehicle
terminal equipment
camera
coordinate
Prior art date
Legal status
Active
Application number
CN201910452227.2A
Other languages
Chinese (zh)
Other versions
CN110188665A (en)
Inventor
左雄
Current Assignee
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd
Priority to CN201910452227.2A
Publication of CN110188665A
Application granted
Publication of CN110188665B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00: Registering or indicating the working of vehicles
    • G07C 5/08: Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0841: Registering performance data
    • G07C 5/085: Registering performance data using electronic data carriers
    • G07C 5/0866: Registering performance data using electronic data carriers, the electronic data carrier being a digital video recorder in combination with a video camera

Abstract

The application provides an image processing method, an image processing apparatus, and a vehicle-mounted device. The method, applied to the vehicle-mounted device, includes: acquiring a first image collected by a terminal device; querying, from the images collected by the vehicle-mounted device, a second image whose timestamp matches the timestamp carried by the first image; aligning corresponding pixel points in the first image and the second image according to a pixel-point mapping relation between images collected by the terminal device and images collected by the vehicle-mounted device; and controlling a vehicle according to the pixel-aligned first and second images. By combining the terminal device and the vehicle-mounted device into dual cameras and aligning corresponding pixel points in the timestamp-matched first and second images, the aligned images acquire stereoscopic-vision characteristics. This solves the prior-art technical problem that a two-dimensional image collected by a single camera lacks stereoscopic characteristics, resulting in low accuracy when controlling a vehicle.

Description

Image processing method and device and computer equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and a computer device.
Background
With the development of science and technology and the continuous improvement of living standards, vehicles have become the transportation means of choice for many families. Yet while enjoying the convenience and speed that vehicles bring, people must also face the trouble caused by traffic accidents.
In the prior art, a vehicle-mounted device is arranged inside the vehicle and detects obstacles, road conditions, and the like around the vehicle, so as to reduce traffic accidents. However, to reduce vehicle cost, existing vehicle-mounted devices usually adopt a relatively cheap monocular camera, which results in low accuracy when the vehicle-mounted device detects the conditions surrounding the vehicle.
Disclosure of Invention
The application provides an image processing method, an image processing apparatus, and a computer device. A terminal device and a vehicle-mounted device are combined into dual cameras, and corresponding pixel points in a timestamp-matched first image and second image are aligned, so that the pixel-aligned first and second images possess certain stereoscopic-vision characteristics. This solves the prior-art technical problem that a two-dimensional image collected by a single camera lacks stereoscopic characteristics, resulting in low accuracy when controlling a vehicle.
An embodiment of a first aspect of the present application provides an image processing method, which is applied to a vehicle-mounted device, and the method includes:
acquiring a first image collected by a terminal device;
querying, from the images collected by the vehicle-mounted device, a second image whose carried timestamp matches the timestamp carried by the first image; and
aligning corresponding pixel points in the first image and the second image according to a pixel-point mapping relation between images collected by the terminal device and images collected by the vehicle-mounted device, so as to control the vehicle according to the pixel-aligned first image and second image.
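The three steps above can be sketched end to end as follows. This is a hypothetical illustration with in-memory frame lists standing in for the two cameras; the frame names and the 0.5-second matching window are assumptions, not values from the patent.

```python
# Hypothetical stand-ins for frames from the terminal device ("phone")
# and the vehicle-mounted device ("car"): (timestamp_seconds, frame).
PHONE_FRAMES = [(100.0, "phone_frame_a"), (107.0, "phone_frame_b")]
CAR_FRAMES = [(99.8, "car_frame_a"), (103.0, "car_frame_x"), (107.1, "car_frame_b")]

def query_matching_frame(ts, frames, window=0.5):
    """Return the vehicle-mounted frame whose timestamp is closest to ts
    within +/- window seconds, or None if no frame qualifies."""
    candidates = [(abs(t - ts), f) for t, f in frames if abs(t - ts) <= window]
    return min(candidates)[1] if candidates else None

def process_pair(first_ts, first_img):
    second_img = query_matching_frame(first_ts, CAR_FRAMES)
    if second_img is None:
        return None               # no match in time: re-acquire a first image
    # pixel alignment via the calibrated pixel-point mapping would go here
    return (first_img, second_img)
```

Returning `None` on a failed match corresponds to re-executing the acquisition step, as the second possible implementation manner below describes.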
As a first possible implementation manner of the present application, querying the second image whose timestamp matches the timestamp carried by the first image includes:
determining a timestamp range according to the timestamp carried by the first image; and
querying a second image whose timestamp is within the timestamp range.
As a second possible implementation manner of the present application, after querying the second image whose timestamp is within the timestamp range, the method further includes:
re-executing the step of acquiring the first image collected by the terminal device if no second image is queried within a set query duration.
As a third possible implementation manner of the present application, there are multiple groups of pixel-point mapping relations, and before aligning corresponding pixel points in the first image and the second image according to the pixel-point mapping relation between images collected by the terminal device and images collected by the vehicle-mounted device so as to control the vehicle according to the pixel-aligned first and second images, the method further includes:
determining the relative position, in the first image, of the vehicle region appearing in the first image;
determining the setting position of the terminal device inside the vehicle according to the relative position; and
querying, among the multiple groups of pixel-point mapping relations, the pixel-point mapping relation corresponding to the position of the terminal device.
As a fourth possible implementation manner of the present application, before determining the relative position of the vehicle region appearing in the first image, the method further includes:
arranging the terminal device at a plurality of positions inside the vehicle; and
calibrating the terminal device camera and the vehicle-mounted device camera at each position to obtain the pixel-point mapping relation corresponding to each position.
As a fifth possible implementation manner of the present application, before aligning the pixel points in the first image with the pixel points in the second image according to the pixel-point mapping relation between images collected by the terminal device and images collected by the vehicle-mounted device so as to control the vehicle according to the pixel-aligned first and second images, the method further includes:
acquiring the rotation matrix and translation matrix respectively corresponding to the terminal device camera and the vehicle-mounted device camera; and
calculating the pixel-point mapping relation according to the rotation matrices and translation matrices respectively corresponding to the terminal device camera and the vehicle-mounted device camera.
As a sixth possible implementation manner of the present application, acquiring the rotation matrix and translation matrix respectively corresponding to the terminal device camera and the vehicle-mounted device camera includes:
respectively acquiring at least three third images collected by the terminal device camera and at least three fourth images collected by the vehicle-mounted device camera, wherein the third images and the fourth images are shot of the same black-and-white checkerboard and carry timestamps of when the corresponding images were shot;
respectively measuring, from the third images and the fourth images, a first coordinate set of the checkerboard corner points (the intersections of the black and white squares) in the terminal device camera coordinate system and a second coordinate set of the same corner points in the vehicle-mounted device camera coordinate system;
rotating and/or translating each coordinate in the first coordinate set until it is aligned with the corresponding coordinate in the checkerboard coordinate system, to obtain the rotation matrix and/or translation matrix corresponding to the terminal device camera; and
rotating and/or translating each coordinate in the second coordinate set until it is aligned with the corresponding coordinate in the checkerboard coordinate system, to obtain the rotation matrix and/or translation matrix corresponding to the vehicle-mounted device camera.
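The "rotate and/or translate until aligned" step can be sketched with the Kabsch (orthogonal Procrustes) method: given the checkerboard corner coordinates measured in one camera's coordinate system and the same corners in the board's own coordinate system, it recovers that camera's rotation matrix and translation vector. This is a hedged numpy sketch under the stated correspondence assumption; a production system would use a full camera-calibration toolbox instead.

```python
import numpy as np

def recover_pose(cam_pts, board_pts):
    """Find R, t such that cam_pts is approximately board_pts @ R.T + t."""
    cam_c, board_c = cam_pts.mean(axis=0), board_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (board_pts - board_c).T @ (cam_pts - cam_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cam_c - R @ board_c
    return R, t
```

Running this once on the first coordinate set and once on the second yields the per-camera rotation and translation matrices referred to above.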
According to the image processing method of the embodiment of the application, a first image collected by the terminal device is acquired; a second image whose carried timestamp matches the timestamp carried by the first image is queried from the images collected by the vehicle-mounted device; and corresponding pixel points in the first image and the second image are aligned according to the pixel-point mapping relation between images collected by the terminal device and images collected by the vehicle-mounted device, so that the vehicle is controlled according to the pixel-aligned first and second images. By combining the terminal device and the vehicle-mounted device into dual cameras and aligning corresponding pixel points in the timestamp-matched first and second images, the pixel-aligned first and second images possess certain stereoscopic-vision characteristics. This solves the prior-art technical problem that a two-dimensional image collected by a single camera lacks stereoscopic characteristics, resulting in low accuracy when controlling the vehicle.
An embodiment of a second aspect of the present application provides an image processing apparatus, applied to a vehicle-mounted device, including:
an acquisition module, configured to acquire a first image collected by the terminal device;
a first query module, configured to query, from the images collected by the vehicle-mounted device, a second image whose carried timestamp matches the timestamp carried by the first image; and
an alignment module, configured to align corresponding pixel points in the first image and the second image according to the pixel-point mapping relation between images collected by the terminal device and images collected by the vehicle-mounted device, so as to control the vehicle according to the pixel-aligned first and second images.
The image processing apparatus of the embodiment of the application acquires a first image collected by the terminal device; queries, from the images collected by the vehicle-mounted device, a second image whose carried timestamp matches the timestamp carried by the first image; and aligns corresponding pixel points in the first image and the second image according to the pixel-point mapping relation between images collected by the terminal device and images collected by the vehicle-mounted device, so as to control the vehicle according to the pixel-aligned first and second images. By combining the terminal device and the vehicle-mounted device into dual cameras and aligning corresponding pixel points in the timestamp-matched first and second images, the pixel-aligned first and second images possess certain stereoscopic-vision characteristics, which solves the prior-art technical problem that a two-dimensional image collected by a single camera lacks stereoscopic characteristics, resulting in low accuracy when controlling the vehicle.
An embodiment of a third aspect of the present application provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the image processing method proposed by the above embodiments of the present application.
A fourth aspect of the present application provides a non-transitory computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the image processing method set forth in the above embodiments of the present application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another image processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 5 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
An image processing method, an apparatus, and a computer device according to embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
The embodiment of the present application is described with the image processing method configured in an image processing apparatus, which can be applied to any computer device so that the computer device can execute the image processing function.
The computer device may be a personal computer (PC), a cloud device, a terminal device, and the like; the terminal device may be a hardware device having an operating system, a touch screen and/or a display screen, and a camera, such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or an in-vehicle device.
As shown in fig. 1, the image processing method may include the steps of:
Step 101, acquiring a first image collected by a terminal device.
In the embodiment of the application, the terminal device is arranged inside the vehicle, and a short-range communication connection is established between the terminal device and the vehicle-mounted device, so that after the terminal device collects the first image, the vehicle-mounted device can obtain it through the established connection.
The short-range communication connection may be a wired or Bluetooth connection.
It should be noted that after the terminal device collects the first image, it may scale the first image to facilitate transmission. The first image carries a timestamp recorded when the terminal device collected it, and the timestamp uniquely identifies the moment at which the first image was collected.
Step 102, querying, from the images collected by the vehicle-mounted device, a second image whose timestamp matches the timestamp carried by the first image.
In the embodiment of the application, while the terminal device collects the first image, the camera of the vehicle-mounted device collects images of the same subject from a different position, and the images collected by the vehicle-mounted device also carry timestamps. Therefore, a second image carrying a timestamp matching the timestamp carried by the first image can be queried from the images collected by the vehicle-mounted device.
In one scenario, a second image matching the timestamp carried by the first image may not be found among the images collected by the vehicle-mounted device for a long time. In this case, to process images in real time, a query duration can be set so that the time spent querying for the matching second image is limited to that duration.
Specifically, a timestamp range can be determined around the timestamp carried by the first image collected by the terminal device, and a second image whose timestamp falls within that range is then queried from the images collected by the vehicle-mounted device.
For example, if the first image carries the timestamp 9:18:15 am, the timestamp range may be determined as 9:18:10 am to 9:18:20 am. The second image is then queried within this timestamp range.
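The timestamp-range query can be sketched as follows, assuming a symmetric five-second window around the first image's timestamp as in the example above (the concrete window width is an illustrative assumption).

```python
from datetime import datetime, timedelta

def timestamp_range(ts, half_width=timedelta(seconds=5)):
    """Build the query window around the first image's timestamp."""
    return ts - half_width, ts + half_width

def query_second_image(first_ts, candidates):
    """Return the (timestamp, image) pairs whose timestamp falls inside
    the window; the caller may pick the closest one, or re-acquire a
    first image if the list is empty."""
    lo, hi = timestamp_range(first_ts)
    return [(t, img) for t, img in candidates if lo <= t <= hi]

first_ts = datetime(2019, 5, 28, 9, 18, 15)   # the 9:18:15 am example
frames = [
    (datetime(2019, 5, 28, 9, 18, 12), "frame_1"),
    (datetime(2019, 5, 28, 9, 18, 30), "frame_2"),
]
matches = query_second_image(first_ts, frames)
```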
As one possible case, no second image carrying a timestamp matching the timestamp carried by the first image is found among the images collected by the vehicle-mounted device within the set query duration. In this case, the vehicle-mounted device acquires a new first image from the terminal device through the established short-range communication connection, and again queries, within the set query duration, for a second image whose carried timestamp matches the timestamp carried by the newly acquired first image.
As another possible case, a second image carrying a timestamp matching the timestamp carried by the first image is found among the images collected by the vehicle-mounted device within the set query duration. In this case, step 103 is executed.
Step 103, aligning corresponding pixel points in the first image and the second image according to the pixel-point mapping relation between images collected by the terminal device and images collected by the vehicle-mounted device, so as to control the vehicle according to the pixel-aligned first and second images.
The pixel-point mapping relation is obtained by calibrating the camera of the terminal device and the camera of the vehicle-mounted device.
It should be noted that when the terminal device is placed at different positions in the vehicle, calibrating the terminal device camera and the vehicle-mounted device camera at each position yields the pixel-point mapping relation corresponding to that position. When the same terminal device is arranged at the same position in the vehicle, images collected by the terminal device and the vehicle-mounted device can be used to control the vehicle without recalibrating the two cameras.
In other words, only when the position and/or angle of the terminal device in the vehicle changes do the camera of the terminal device and the camera of the vehicle-mounted device need to be recalibrated, to obtain the pixel-point mapping relation between images collected by the terminal device and images collected by the vehicle-mounted device at the new position.
In the embodiment of the application, the pixel-point mapping relation corresponding to the position of the terminal device is determined according to the setting position of the terminal device inside the vehicle, and corresponding pixel points in the first image and the second image are aligned according to that mapping relation. Because the pixel-aligned first and second images can provide certain stereoscopic-vision characteristics, controlling the vehicle with them improves both control accuracy and driving safety.
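The alignment step can be sketched as applying a precomputed per-pixel lookup, in the spirit of stereo remapping: the index maps state, for every pixel of the aligned output, which source pixel it is taken from. The maps and the one-column shift below are toy assumptions for illustration, not a calibrated mapping.

```python
import numpy as np

def align_pixels(src, map_y, map_x):
    # Gather source pixels through the precomputed index maps.
    return src[map_y, map_x]

h, w = 3, 4
src = np.arange(h * w).reshape(h, w)      # stand-in for the first image
ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
map_x = np.clip(xs + 1, 0, w - 1)         # pull each output pixel from one column to its right
aligned = align_pixels(src, ys, map_x)
```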
In one possible scene, when an obstacle is detected while the vehicle is running, the obstacle is photographed simultaneously by the terminal device and the vehicle-mounted device arranged inside the vehicle, and is then detected from the pixel-aligned first and second images. This effectively identifies the obstacle and its distance from the running vehicle, and solves the technical problem of low detection accuracy when only the vehicle-mounted device detects the obstacle, thereby helping the vehicle avoid colliding with it.
Obstacle detection by the dual cameras formed from the camera of the terminal device and the camera of the vehicle-mounted device is only an example; functions such as vehicle ranging, traffic sign recognition, and vehicle positioning can also be realized, and are not elaborated here.
According to the image processing method, the first image acquired by the terminal equipment is acquired; inquiring a second image with a carried timestamp matched with a timestamp carried by the first image from the images collected by the vehicle-mounted equipment; and aligning the pixel points of corresponding pixels in the first image and the second image according to the pixel point mapping relation between the image acquired by the terminal equipment and the image acquired by the vehicle-mounted equipment, so as to control the vehicle according to the first image and the second image after the pixel points are aligned. According to the method, the terminal equipment and the vehicle-mounted equipment are combined to form the double cameras, and then the pixel points of the corresponding pixels in the first image and the second image which are matched with the acquired timestamps are aligned, so that the first image and the second image which are aligned with the pixel points have certain stereoscopic vision characteristics, and the technical problem that in the prior art, the two-dimensional image acquired by the single camera does not have the visual characteristics, so that the vehicle is controlled with low precision is solved.
In one possible case in the embodiment of the application, there are multiple groups of pixel-point mapping relations between images collected by the terminal device and images collected by the vehicle-mounted device: by arranging the terminal device at multiple positions in the vehicle, the pixel-point mapping relation corresponding to each position can be obtained. The setting position of the terminal device inside the vehicle is then determined, so that the pixel-point mapping relation corresponding to that position can be queried among the multiple groups.
The above process is described in detail with reference to fig. 2, and fig. 2 is a schematic flow chart of another image processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the image processing method may include the steps of:
Step 201, arranging the terminal device at a plurality of positions inside the vehicle.
An existing binocular camera arranged in a vehicle integrates two cameras whose relative position is fixed, so the range of images it can capture is limited.
In the embodiment of the application, when the terminal device camera and the vehicle-mounted device camera form the dual cameras, the terminal device can be arranged at multiple positions inside the vehicle. Thus, by adjusting the position of the terminal device inside the vehicle, dual cameras meeting different needs can be obtained, enabling the combination of the terminal device camera and the vehicle-mounted device camera to realize more functions.
Step 202, calibrating the terminal device camera and the vehicle-mounted device camera at each position to obtain the pixel-point mapping relation corresponding to each position.
In the embodiment of the application, the terminal device is arranged at multiple positions in the vehicle, and at each position the terminal device camera and the vehicle-mounted device camera form dual cameras. Because the structure and assembly precision of the two cameras are limited, the resulting dual cameras cannot achieve a perfect imaging effect. In this case, the terminal device camera and the vehicle-mounted device camera at each position need to be calibrated, so as to eliminate distortion and obtain the intrinsic and extrinsic parameter matrices.
The extrinsic parameter matrix of a camera comprises its rotation matrix and translation matrix.
It should be noted that when two cameras are used to collect images they should ideally be placed in parallel, which is difficult to achieve in practice; therefore the terminal device camera and the vehicle-mounted device camera need stereo rectification to obtain the rotation matrix and translation matrix corresponding to each camera. The pixel-point mapping relation is then calculated from the rotation matrices and translation matrices respectively corresponding to the terminal device camera and the vehicle-mounted device camera.
The detailed process of calibrating the terminal device camera and the vehicle-mounted device camera to obtain the rotation matrix and the translation matrix respectively corresponding to the terminal device camera and the vehicle-mounted device camera can be referred to the following embodiments, which are not repeated herein.
The following illustrates how to calculate the pixel-point mapping relation from the rotation matrices and translation matrices respectively corresponding to the terminal device camera and the vehicle-mounted device camera.
Suppose the terminal device is arranged to the left of the vehicle-mounted device. Calibrating the terminal device camera and the vehicle-mounted device camera yields their respective rotation and translation matrices Rl, Tl, Rr, Tr. In this case, when pixel points of the image collected by the terminal device are mapped to pixel points of the image collected by the vehicle-mounted device, the corresponding pixel-point mapping relation can be calculated according to formula (1):
R = Rr·Rl⁻¹, T = Tr − R·Tl (1)
where R and T in formula (1) are the rotation matrix and translation matrix for mapping pixel points of the image collected by the terminal device to pixel points of the image collected by the vehicle-mounted device.
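Formula (1) can be checked numerically with a small numpy sketch. If (Rl, Tl) and (Rr, Tr) take checkerboard coordinates into the terminal-device and vehicle-mounted-device camera frames respectively, the left-to-right mapping composes as below; the concrete matrices in the example are illustrative, not calibrated values.

```python
import numpy as np

def left_to_right(Rl, Tl, Rr, Tr):
    # Rl is orthonormal, so its transpose is its inverse.
    R = Rr @ Rl.T
    T = Tr - R @ Tl
    return R, T

# Illustrative extrinsics: a 90-degree rotation about z for the terminal
# camera, identity for the vehicle-mounted camera.
Rl = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
Tl = np.array([1., 0., 0.])
Rr = np.eye(3)
Tr = np.array([0., 2., 0.])
R, T = left_to_right(Rl, Tl, Rr, Tr)
```

A point Xb on the board then satisfies Xr = R·Xl + T, where Xl = Rl·Xb + Tl and Xr = Rr·Xb + Tr, which is exactly the consistency formula (1) expresses.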
Similarly, the mapping relation for pixel points of images collected by the vehicle-mounted device mapped to images collected by the terminal device can be calculated from the rotation and translation matrices corresponding to the camera of the terminal device and the camera of the vehicle-mounted device, and is not described again here.
It should be noted that when the terminal device is placed at different positions in the vehicle, calibrating the terminal device camera and the vehicle-mounted device camera at each position yields the pixel-point mapping relation corresponding to that position; the mapping relation for each position is stored so that the corresponding mapping relation can be queried according to the position of the terminal device. When the same terminal device is arranged at the same position in the vehicle and the two cameras have already been calibrated, images collected by the terminal device and the vehicle-mounted device can be used for vehicle control without recalibrating the cameras.
Step 203, acquiring a first image acquired by the terminal device.
In the embodiment of the present application, the implementation process of step 203 may refer to the implementation process of step 101 in the foregoing embodiment, and is not described herein again.
Step 204, determining the relative position, in the first image, of the vehicle area presented in the first image.
In this embodiment, since the terminal device is disposed inside the vehicle, the vehicle area may be present in the first image collected by the terminal device. And when the terminal device is arranged at different positions in the vehicle, the vehicle areas presented in the first image are different.
Specifically, after the first image acquired by the terminal device is obtained, the vehicle area may be identified by an image recognition method, and the relative position of the vehicle area in the first image may then be determined.
And step 205, determining the setting position of the terminal device in the vehicle according to the relative position.
In the embodiment of the application, the terminal equipment is arranged at different positions in the vehicle, and the relative positions of the vehicle areas in the first image are different in the collected first image. Therefore, the setting position of the terminal device inside the vehicle can be determined according to the relative position of the vehicle region in the first image.
Step 206, in the mapping relationship of the multiple groups of pixel points, the mapping relationship of the pixel points corresponding to the position of the terminal device is inquired.
In the embodiment of the application, when the terminal equipment is arranged at different positions in the vehicle, the camera of the terminal equipment and the camera of the vehicle-mounted equipment are calibrated at each position, and the pixel point mapping relation corresponding to each position can be obtained. Therefore, after the setting position of the terminal equipment in the vehicle is determined, the pixel point mapping relation corresponding to the position of the terminal equipment can be obtained by inquiring in the multi-group pixel point mapping relation.
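A minimal sketch of the per-position lookup of steps 204 to 206, assuming hypothetical position labels, made-up mapping parameters, and a crude heuristic for inferring the setting position from the vehicle area (the patent fixes none of these; every name and threshold below is illustrative):

```python
import numpy as np

# hypothetical store: one calibrated (R, T) pixel-mapping entry per setting position
calibrated_maps = {
    "left":   {"R": np.eye(3), "T": np.array([0.06, 0.0, 0.0])},
    "center": {"R": np.eye(3), "T": np.array([0.00, 0.0, 0.0])},
    "right":  {"R": np.eye(3), "T": np.array([-0.06, 0.0, 0.0])},
}

def position_from_vehicle_region(region_cx, image_width):
    """Crude illustrative heuristic: classify the device setting position from
    the horizontal centre of the vehicle region presented in the first image."""
    ratio = region_cx / image_width
    if ratio < 1 / 3:
        return "right"    # vehicle body appears on the left -> device on the right
    if ratio > 2 / 3:
        return "left"     # vehicle body appears on the right -> device on the left
    return "center"

# vehicle region centred near the right edge of a 720-pixel-wide first image
pos = position_from_vehicle_region(region_cx=620, image_width=720)
mapping = calibrated_maps[pos]        # step 206: query the matching relation
```

The point of the lookup is that calibration happens once per position, while the query at run time is a cheap dictionary access keyed by the inferred position.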
And step 207, inquiring a second image with a timestamp matched with the timestamp carried by the first image from the images collected by the vehicle-mounted equipment.
And 208, aligning the pixel points of corresponding pixels in the first image and the second image according to the pixel point mapping relation between the image acquired by the terminal equipment and the image acquired by the vehicle-mounted equipment, and controlling the vehicle according to the first image and the second image after the pixel points are aligned.
In the embodiment of the present application, the implementation processes of step 207 and step 208 refer to the implementation processes of step 102 and step 103 in the above embodiment, and are not described herein again.
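The timestamp matching of step 207 (a second image whose timestamp falls within a range around the first image's timestamp) can be sketched as follows; the 50 ms tolerance and the frame list are assumptions for illustration only:

```python
import bisect

def find_matching_frame(ts_first, onboard_frames, tol_ms=50):
    """onboard_frames: list of (timestamp_ms, image) pairs sorted by timestamp.
    Return the image whose timestamp is closest to ts_first and within
    [ts_first - tol_ms, ts_first + tol_ms], or None if no frame qualifies."""
    stamps = [t for t, _ in onboard_frames]
    i = bisect.bisect_left(stamps, ts_first)
    best = None
    for j in (i - 1, i):                      # only the two neighbours can be closest
        if 0 <= j < len(stamps) and abs(stamps[j] - ts_first) <= tol_ms:
            if best is None or abs(stamps[j] - ts_first) < abs(stamps[best] - ts_first):
                best = j
    return onboard_frames[best][1] if best is not None else None

frames = [(0, "img0"), (33, "img1"), (66, "img2")]  # e.g. a 30 fps onboard stream
match = find_matching_frame(40, frames)             # stamp 33 is within 50 ms
```

Returning None corresponds to the fallback described earlier: if no second image is found within the query duration, the step of acquiring a first image is executed again.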
The image processing method of the embodiment of the application comprises the steps of arranging the terminal equipment at a plurality of positions in a vehicle, calibrating the camera of the terminal equipment and the camera of the vehicle-mounted equipment at each position to obtain a pixel point mapping relation corresponding to each position, acquiring a first image acquired by the terminal equipment, determining the relative position of a vehicle region in the first image, determining the arrangement position of the terminal equipment in the vehicle according to the relative position, inquiring the pixel point mapping relation corresponding to the position of the terminal equipment in the multi-group pixel point mapping relation, inquiring a second image with a carried timestamp matched with the timestamp carried by the first image from the image acquired by the vehicle-mounted equipment, aligning the pixel points of corresponding pixels in the first image and the second image according to the pixel point mapping relation between the image acquired by the terminal equipment and the image acquired by the vehicle-mounted equipment, and controlling the vehicle according to the first image and the second image after the pixel points are aligned. Therefore, the terminal equipment is arranged at the position inside the vehicle, the pixel point mapping relation corresponding to the position is inquired, and the pixel point alignment is carried out on the corresponding pixels in the first image and the second image according to the pixel point mapping relation, so that the first image and the second image after the pixel point alignment have certain stereoscopic vision characteristics, and the accuracy of the visual algorithm is improved.
As a possible implementation manner, the Zhang Zhengyou calibration method may be adopted to calibrate the terminal device camera and the vehicle-mounted device camera to obtain internal parameters and external parameters corresponding to the terminal device camera and the vehicle-mounted device camera, where the external parameters include a rotation matrix and a translation matrix corresponding to the terminal device camera and the vehicle-mounted device camera, respectively. The above process is described in detail with reference to fig. 3, and fig. 3 is a flowchart illustrating another image processing method according to an embodiment of the present application.
As shown in fig. 3, the image processing method may include the steps of:
step 301, at least three third images collected by the terminal device camera and at least three fourth images collected by the vehicle-mounted device camera are respectively obtained.
The third images and the fourth images are shot of the same black-and-white checkerboard, and each image carries a timestamp indicating when it was shot.
In the embodiment of the application, the Zhang Zhengyou calibration method can be adopted to calibrate the terminal equipment camera and the vehicle-mounted equipment camera, so as to obtain the internal and external parameters of the terminal equipment camera and the vehicle-mounted equipment camera.
When the camera is calibrated by adopting a Zhang Zhengyou calibration method, no additional equipment is needed, and only one piece of printed checkerboard with alternate black and white is needed. In the calibration process, the positions of the terminal equipment, the vehicle-mounted equipment and the calibration plate are not limited, the terminal equipment, the vehicle-mounted equipment and the calibration plate can be placed at any position, and the calibration precision is high.
The calibration plate is a flat plate for fixing black and white checkerboard.
Specifically, at least three third images obtained by shooting the calibration plate from different positions, different angles and different postures by the camera of the terminal equipment are obtained. And simultaneously, acquiring at least three fourth images obtained by shooting the calibration plate by the camera of the vehicle-mounted equipment from different positions, different angles and different postures at the same time.
For example, 13 images are obtained by shooting the calibration board with the camera of the terminal device from different positions, angles and postures, and 10 clear third images are selected from the 13 images. At the same time, the camera of the vehicle-mounted device is controlled to shoot the calibration plate from different positions, angles and postures to obtain 13 images, from which 10 clear fourth images are selected.
Step 302, respectively measuring a first coordinate set of the intersection points of the black and white checkerboard blocks in the third images in the terminal equipment camera coordinate system, and a second coordinate set of the intersection points of the black and white checkerboard blocks in the fourth images in the vehicle-mounted equipment camera coordinate system.
In the embodiment of the application, the third images and the fourth images are respectively processed by a corner detection algorithm to detect the intersection points of the black blocks and the white blocks. The coordinates of the intersection points of the black and white checkerboard blocks in the at least three third images are measured in the terminal equipment camera coordinate system, and the measured coordinates are defined as the first coordinate set. At the same time, the coordinates of the intersection points of the black and white checkerboard blocks in the at least three fourth images are measured in the vehicle-mounted equipment camera coordinate system, and the measured coordinates are defined as the second coordinate set.
It should be noted that a corner point is generally defined as an intersection of two edges. For example, a triangle has three corners and a rectangle has four corners, which are corner points. The corner detection algorithm is a corner extraction algorithm based on a gray scale image, belongs to the prior art, and is not described herein again.
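For illustration, the response underlying such a gray-scale corner detector can be sketched with a minimal Harris operator in NumPy (a real pipeline would use a library chessboard-corner routine; the tiny synthetic patch below stands in for a photographed checkerboard):

```python
import numpy as np

def harris_response(img, k=0.04, r=2):
    """Minimal Harris corner response built from a box-filtered structure tensor."""
    Iy, Ix = np.gradient(img.astype(float))        # d/dy, d/dx
    def box(a):                                    # (2r+1) x (2r+1) box filter
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy                    # high only where two edge
    trace = Sxx + Syy                              # directions meet (a corner)
    return det - k * trace ** 2

# synthetic 2x2 checkerboard patch; the single interior block
# intersection ("corner") sits at pixel (16, 16)
img = np.zeros((32, 32))
img[:16, :16] = 1.0
img[16:, 16:] = 1.0

resp = harris_response(img)
interior = resp[8:24, 8:24]                        # skip wrap-around borders
iy, ix = np.unravel_index(np.argmax(interior), interior.shape)
y, x = iy + 8, ix + 8                              # detected corner, near (16, 16)
```

Along a plain edge only one gradient direction is present, so the determinant term stays near zero and the response is negative; only at the block intersection do both directions contribute, which is exactly the property the checkerboard corner measurement relies on.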
Step 303, rotating and/or translating each coordinate in the first coordinate set until the coordinate is aligned with a corresponding coordinate in a chessboard coordinate system to obtain a rotation matrix and/or a translation matrix corresponding to the terminal equipment camera; and rotating and/or translating each coordinate in the second coordinate set until the coordinate is aligned with the corresponding coordinate in the chessboard coordinate system to obtain a rotation matrix and/or a translation matrix corresponding to the vehicle-mounted equipment camera.
In this embodiment, based on the intersection points of the checkerboard black and white blocks detected in the third images and the fourth images by the corner detection algorithm, the internal parameters and the external parameters corresponding to the terminal device camera and the vehicle-mounted device camera are calculated respectively. The external parameters comprise the rotation matrix and the translation matrix corresponding to the terminal device camera and the vehicle-mounted device camera respectively.
Specifically, each coordinate in the first coordinate set is rotated and/or translated respectively until the coordinate is aligned with a corresponding coordinate in the chessboard coordinate system, so that a rotation matrix and/or a translation matrix corresponding to the terminal equipment camera are obtained. And simultaneously, respectively rotating and/or translating each coordinate in the second coordinate set until the coordinates are aligned with corresponding coordinates in the chessboard coordinate system, so as to obtain a rotation matrix and/or a translation matrix corresponding to the vehicle-mounted equipment camera.
Furthermore, distortion correction can be performed on the acquired third images and fourth images by the Bouguet method, and binocular rectification can then be performed on the two images. By transforming the left and right image planes, the two planes are made coplanar with image rows aligned, which reduces the computational complexity of stereo matching.
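One ingredient of the Bouguet rectification mentioned above is splitting the inter-camera rotation in half, so that each image plane is rotated by only half of R rather than applying all of R to one camera; a sketch of that split via the axis-angle (Rodrigues) representation, using a made-up 10° rotation as R:

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' rotation formula: axis-angle -> 3x3 rotation matrix."""
    ax, ay, az = axis
    K = np.array([[0.0, -az, ay], [az, 0.0, -ax], [-ay, ax, 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def matrix_to_axis_angle(R):
    """Inverse of the above for angles in (0, pi)."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    n = np.linalg.norm(v)
    axis = v / n if n > 1e-12 else np.array([1.0, 0.0, 0.0])
    return axis, angle

# made-up inter-camera rotation: 10 degrees about the y axis
R = axis_angle_to_matrix(np.array([0.0, 1.0, 0.0]), np.deg2rad(10))

# Bouguet split: each camera's image plane is turned by the half rotation
axis, angle = matrix_to_axis_angle(R)
R_half = axis_angle_to_matrix(axis, angle / 2)

# applying the half rotation twice recovers the full inter-camera rotation
assert np.allclose(R_half @ R_half, R)
```

Sharing the rotation between the two views this way keeps the reprojection distortion of each rectified image small, which is the usual motivation given for the Bouguet scheme.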
In the embodiment of the application, the camera of the terminal equipment and the camera of the vehicle-mounted equipment are calibrated by a Zhang-friend calibration method so as to obtain a rotation matrix and a translation matrix corresponding to the camera of the terminal equipment and the camera of the vehicle-mounted equipment respectively. Therefore, when the calibrated terminal equipment camera and the vehicle-mounted equipment camera are used for collecting images, binocular image data can be obtained, so that the collected images have stereoscopic vision characteristics, and the technical problem that in the prior art, two-dimensional images collected by a single camera do not have visual characteristics and cannot accurately control vehicles is solved.
In order to implement the above embodiments, the present application also provides an image processing apparatus.
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 4, the image processing apparatus 100 includes: an acquisition module 110, a first query module 120, and an alignment module 130.
The obtaining module 110 is configured to obtain a first image collected by a terminal device.
The first query module 120 is configured to query, from the images acquired by the vehicle-mounted device, a second image with a timestamp matching a timestamp carried by the first image.
And the alignment module 130 is configured to perform pixel alignment on corresponding pixels in the first image and the second image according to a pixel mapping relationship between the terminal device collected image and the vehicle-mounted device collected image, so as to perform vehicle control according to the first image and the second image after the pixel alignment.
As a possible implementation manner, the first query module 120 further includes:
and the determining unit is used for determining the time stamp range according to the time stamp carried by the first image.
And the inquiring unit is used for inquiring the second image of which the time stamp is in the time stamp range.
As another possible implementation manner, the first query module 120 further includes:
and the execution module is used for re-executing the step of acquiring the first image acquired by the terminal equipment if the second image is not inquired within the set inquiry duration.
As another possible implementation manner, the image processing apparatus 100 further includes:
a first determination module for determining a relative position in the first image of a vehicle area present in the first image;
the second determination module is used for determining the setting position of the terminal equipment in the vehicle according to the relative position;
and the second query module is used for querying the pixel point mapping relation corresponding to the position of the terminal equipment in the multi-group pixel point mapping relation.
As another possible implementation manner, the image processing apparatus 100 further includes:
the terminal equipment comprises a setting module, a processing module and a control module, wherein the setting module is used for setting the terminal equipment at a plurality of positions in the vehicle;
and the calibration module is used for calibrating the terminal equipment camera and the vehicle-mounted equipment camera at each position so as to obtain a pixel point mapping relation corresponding to each position.
As another possible implementation manner, the image processing apparatus 100 further includes:
the second acquisition module is used for acquiring a rotation matrix and a translation matrix which respectively correspond to the terminal equipment camera and the vehicle-mounted equipment camera;
and the calculation module is used for calculating to obtain the pixel point mapping relation according to the rotation matrix and the translation matrix which respectively correspond to the terminal equipment camera and the vehicle-mounted equipment camera.
As another possible implementation manner, the second obtaining module is specifically configured to:
respectively acquiring at least three third images acquired by the terminal equipment camera and at least three fourth images acquired by the vehicle-mounted equipment camera; the third image and the fourth image are shot for the same checkerboard with alternate black and white, and carry timestamps for shooting corresponding images;
respectively measuring a first coordinate set of the intersection points of the black and white checkerboard blocks in the third image in the terminal equipment camera coordinate system and a second coordinate set of the intersection points of the black and white checkerboard blocks in the fourth image in the vehicle-mounted equipment camera coordinate system;
rotating and/or translating each coordinate in the first coordinate set until the coordinate is aligned with a corresponding coordinate in a chessboard coordinate system to obtain a rotation matrix and/or a translation matrix corresponding to the terminal equipment camera;
and rotating and/or translating each coordinate in the second coordinate set until the coordinate is aligned with the corresponding coordinate in the chessboard coordinate system to obtain a rotation matrix and/or a translation matrix corresponding to the vehicle-mounted equipment camera.
It should be noted that the foregoing explanation of the embodiment of the image processing method is also applicable to the image processing apparatus of the embodiment, and the implementation principle thereof is similar and will not be described herein again.
The image processing device of the embodiment of the application acquires a first image acquired by terminal equipment; inquiring a second image with a carried timestamp matched with a timestamp carried by the first image from the images collected by the vehicle-mounted equipment; and aligning the pixel points of corresponding pixels in the first image and the second image according to the pixel point mapping relation between the image acquired by the terminal equipment and the image acquired by the vehicle-mounted equipment, so as to control the vehicle according to the first image and the second image after the pixel points are aligned. According to the method, the terminal equipment and the vehicle-mounted equipment are combined to form the double cameras, and then the pixel points of the corresponding pixels in the first image and the second image which are matched with the acquired timestamps are aligned, so that the first image and the second image which are aligned with the pixel points have certain stereoscopic vision characteristics, and the technical problem that in the prior art, the two-dimensional image acquired by the single camera does not have the visual characteristics, so that the vehicle is controlled with low precision is solved.
In order to implement the foregoing embodiments, the present application also provides a computer device, including: comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image processing method as proposed by the above embodiments of the present application when executing the program.
In order to achieve the above embodiments, the present application also proposes a non-transitory computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the image processing method as proposed by the above embodiments of the present application.
FIG. 5 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present application. The computer device 12 shown in fig. 5 is only an example and should not bring any limitation to the function and scope of use of the embodiments of the present application.
As shown in FIG. 5, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, to name a few.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 30 and/or cache Memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk Read Only Memory (CD-ROM), a Digital versatile disk Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network such as the Internet) via Network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, for example, implements the image processing method mentioned in the foregoing embodiment, by executing a program stored in the system memory 28.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. An image processing method applied to a vehicle-mounted device, the method comprising:
acquiring a first image acquired by terminal equipment;
inquiring a second image with a timestamp matched with the timestamp carried by the first image from the images acquired by the vehicle-mounted equipment;
aligning pixel points of corresponding pixels in the first image and the second image according to a pixel point mapping relation between the image acquired by the terminal equipment and the image acquired by the vehicle-mounted equipment, and controlling the vehicle according to the aligned first image and second image of the pixel points;
the terminal equipment is arranged at a plurality of positions in the vehicle, and when the terminal equipment is arranged at different positions in the vehicle, the terminal equipment camera and the vehicle-mounted equipment camera form double cameras meeting different requirements;
the method further comprises the following steps:
calibrating the terminal equipment camera and the vehicle-mounted equipment camera at each position to obtain and store a pixel point mapping relation corresponding to each position; the pixel point mapping relations are multiple groups;
wherein, according to the pixel mapping relation between the terminal device collected image and the vehicle-mounted device collected image, the pixel alignment is carried out on the corresponding pixels in the first image and the second image, so that before the vehicle control is carried out on the first image and the second image after the pixel alignment, the method further comprises the following steps:
determining a relative position in the first image of a region of a vehicle present in the first image;
determining the setting position of the terminal equipment in the vehicle according to the relative position;
and inquiring the pixel point mapping relation corresponding to the position of the terminal equipment in the multi-group pixel point mapping relation.
2. The method of claim 1, wherein querying the second image with a timestamp matching the timestamp carried by the first image comprises:
determining a timestamp range according to a timestamp carried by the first image;
and querying a second image with a timestamp within the timestamp range.
3. The method of claim 2, wherein querying the second image having the timestamp in the range of timestamps further comprises:
and re-executing the step of acquiring the first image acquired by the terminal equipment if the second image is not inquired within the set inquiry duration.
4. The method according to claim 1, wherein before aligning each pixel in the first image with the corresponding pixel in the second image according to the pixel mapping relationship between images captured by the terminal device and images captured by the vehicle-mounted device, so as to perform vehicle control according to the pixel-aligned first image and second image, the method further comprises:
acquiring a rotation matrix and a translation matrix corresponding respectively to the terminal device camera and the vehicle-mounted device camera;
and calculating the pixel mapping relationship according to the rotation matrices and translation matrices corresponding respectively to the terminal device camera and the vehicle-mounted device camera.
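One concrete way to derive such a pixel mapping from the two extrinsic poses (an assumption on our part, since the claim does not fix the formula) is to compose them into a relative pose and, for points on a common plane such as the road, use the standard plane-induced homography. The intrinsics K1/K2, the pose convention x_cam = R·x_world + t, and the plane parameters are illustrative:

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Pose of camera 2 relative to camera 1, given each camera's
    rotation/translation w.r.t. the shared (chessboard) frame,
    using the convention x_cam = R @ x_world + t."""
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel

def plane_homography(K1, K2, R_rel, t_rel, n, d):
    """Homography mapping camera-1 pixels to camera-2 pixels for
    points on the plane n^T x + d = 0 (plane expressed in camera 1's
    frame). This is the classic H = K2 (R - t n^T / d) K1^{-1}."""
    H = K2 @ (R_rel - np.outer(t_rel, n) / d) @ np.linalg.inv(K1)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1
```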
5. The method according to claim 4, wherein acquiring the rotation matrix and the translation matrix corresponding respectively to the terminal device camera and the vehicle-mounted device camera comprises:
acquiring at least three third images captured by the terminal device camera and at least three fourth images captured by the vehicle-mounted device camera, respectively; the third images and the fourth images are shot of the same black-and-white checkerboard and carry timestamps of when the corresponding images were shot;
measuring a first set of coordinates, in the terminal device camera coordinate system, of the checkerboard corner points in the third images, and a second set of coordinates, in the vehicle-mounted device camera coordinate system, of the checkerboard corner points in the fourth images;
rotating and/or translating each coordinate in the first set until it is aligned with the corresponding coordinate in the chessboard coordinate system, to obtain the rotation matrix and/or translation matrix corresponding to the terminal device camera;
and rotating and/or translating each coordinate in the second set until it is aligned with the corresponding coordinate in the chessboard coordinate system, to obtain the rotation matrix and/or translation matrix corresponding to the vehicle-mounted device camera.
6. An image processing apparatus, applied to a vehicle-mounted device, the apparatus comprising:
an acquisition module, configured to acquire a first image captured by a terminal device;
a first query module, configured to query, from images captured by the vehicle-mounted device, a second image carrying a timestamp matching the timestamp carried by the first image;
an alignment module, configured to align corresponding pixels in the first image and the second image according to a pixel mapping relationship between images captured by the terminal device and images captured by the vehicle-mounted device, so that vehicle control is performed according to the pixel-aligned first image and second image;
wherein the terminal device can be placed at a plurality of positions inside a vehicle; when placed at different positions, the terminal device camera and the vehicle-mounted device camera form dual cameras meeting different requirements;
the apparatus further comprises:
a calibration module, configured to calibrate the terminal device camera and the vehicle-mounted device camera at each position, and to obtain and store a pixel mapping relationship corresponding to each position, so that multiple groups of pixel mapping relationships are stored;
and the apparatus further comprises:
a first determination module, configured to determine the relative position, within the first image, of the vehicle region appearing in the first image;
a second determination module, configured to determine the placement position of the terminal device in the vehicle according to the relative position;
and a second query module, configured to query, among the multiple groups of pixel mapping relationships, the pixel mapping relationship corresponding to the position of the terminal device.
7. The apparatus of claim 6, wherein the first query module comprises:
a determination unit, configured to determine a timestamp range according to the timestamp carried by the first image;
and a query unit, configured to query a second image whose timestamp is within the timestamp range.
8. The apparatus of claim 7, wherein the first query module further comprises:
an execution module, configured to re-execute the step of acquiring the first image captured by the terminal device if no second image is found within a set query duration.
9. The apparatus of claim 8, further comprising:
a second acquisition module, configured to acquire a rotation matrix and a translation matrix corresponding respectively to the terminal device camera and the vehicle-mounted device camera;
and a calculation module, configured to calculate the pixel mapping relationship according to the rotation matrices and translation matrices corresponding respectively to the terminal device camera and the vehicle-mounted device camera.
10. The apparatus of claim 9, wherein the second acquisition module is specifically configured to:
acquire at least three third images captured by the terminal device camera and at least three fourth images captured by the vehicle-mounted device camera, respectively; the third images and the fourth images are shot of the same black-and-white checkerboard and carry timestamps of when the corresponding images were shot;
measure a first set of coordinates, in the terminal device camera coordinate system, of the checkerboard corner points in the third images, and a second set of coordinates, in the vehicle-mounted device camera coordinate system, of the checkerboard corner points in the fourth images;
rotate and/or translate each coordinate in the first set until it is aligned with the corresponding coordinate in the chessboard coordinate system, to obtain the rotation matrix and/or translation matrix corresponding to the terminal device camera;
and rotate and/or translate each coordinate in the second set until it is aligned with the corresponding coordinate in the chessboard coordinate system, to obtain the rotation matrix and/or translation matrix corresponding to the vehicle-mounted device camera.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image processing method according to any one of claims 1 to 5 when executing the program.
12. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the image processing method according to any one of claims 1 to 5.
CN201910452227.2A 2019-05-28 2019-05-28 Image processing method and device and computer equipment Active CN110188665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910452227.2A CN110188665B (en) 2019-05-28 2019-05-28 Image processing method and device and computer equipment

Publications (2)

Publication Number Publication Date
CN110188665A CN110188665A (en) 2019-08-30
CN110188665B true CN110188665B (en) 2022-02-22

Family

ID=67718363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910452227.2A Active CN110188665B (en) 2019-05-28 2019-05-28 Image processing method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN110188665B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866629A (en) * 2019-11-27 2021-05-28 深圳市大富科技股份有限公司 Binocular vision application control method and terminal
CN113733354B (en) * 2021-08-09 2023-04-25 中科云谷科技有限公司 Control method, processor and control device for mixer truck

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102903101A (en) * 2012-09-06 2013-01-30 北京航空航天大学 Method for carrying out water-surface data acquisition and reconstruction by using multiple cameras
CN103522970A (en) * 2013-05-31 2014-01-22 Tcl集团股份有限公司 Vehicle driving safety detection method and system based on machine vision
CN104182982A (en) * 2014-08-27 2014-12-03 大连理工大学 Overall optimizing method of calibration parameter of binocular stereo vision camera
CN107444264A (en) * 2016-05-31 2017-12-08 法拉第未来公司 Use the object of camera calibration du vehicule
CN108016435A (en) * 2016-11-04 2018-05-11 Lg电子株式会社 Vehicle control apparatus in the car and control method for vehicle are installed
CN109166155A (en) * 2018-09-26 2019-01-08 北京图森未来科技有限公司 A kind of calculation method and device of vehicle-mounted binocular camera range error

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10599959B2 (en) * 2017-04-05 2020-03-24 International Business Machines Corporation Automatic pest monitoring by cognitive image recognition with two cameras on autonomous vehicles


Similar Documents

Publication Publication Date Title
CN110264520B (en) Vehicle-mounted sensor and vehicle pose relation calibration method, device, equipment and medium
CN110927708B (en) Calibration method, device and equipment of intelligent road side unit
US10339390B2 (en) Methods and apparatus for an imaging system
JP4803449B2 (en) On-vehicle camera calibration device, calibration method, and vehicle production method using this calibration method
JP4803450B2 (en) On-vehicle camera calibration device and vehicle production method using the device
US11880993B2 (en) Image processing device, driving assistance system, image processing method, and program
CN109828250B (en) Radar calibration method, calibration device and terminal equipment
CN110188665B (en) Image processing method and device and computer equipment
JP5228614B2 (en) Parameter calculation apparatus, parameter calculation system and program
CN110345875B (en) Calibration and ranging method, device, electronic equipment and computer readable storage medium
CN111652937B (en) Vehicle-mounted camera calibration method and device
CN114494448A (en) Calibration error evaluation method and device, computer equipment and storage medium
JP5240517B2 (en) Car camera calibration system
CN113496528B (en) Method and device for calibrating position of visual detection target in fixed traffic roadside scene
CN113763478B (en) Unmanned vehicle camera calibration method, device, equipment, storage medium and system
CN116704048B (en) Double-light registration method
CN111243021A (en) Vehicle-mounted visual positioning method and system based on multiple combined cameras and storage medium
CN111336938A (en) Robot and object distance detection method and device thereof
JP4905812B2 (en) Camera calibration device
CN112465920A (en) Vision sensor calibration method and device
CN114283177A (en) Image registration method and device, electronic equipment and readable storage medium
CN115601450B (en) Panoramic calibration method and related device, equipment, system and medium
TWI793584B (en) Mapping and localization system for automated valet parking and method thereof
JPWO2020059064A1 (en) Calculator, information processing method and program
CN111862211B (en) Positioning method, device, system, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211013

Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Zhilian (Beijing) Technology Co.,Ltd.

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant