CN111307170B - Unmanned vehicle driving planning method, device, equipment and medium - Google Patents

Unmanned vehicle driving planning method, device, equipment and medium

Info

Publication number
CN111307170B
Authority
CN
China
Prior art keywords
signal
image data
target vehicle
synchronous
matrix
Prior art date
Legal status
Active
Application number
CN201811520538.XA
Other languages
Chinese (zh)
Other versions
CN111307170A (en)
Inventor
程乐丹
程晓鹏
Current Assignee
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date
Filing date
Publication date
Application filed by BYD Co Ltd
Priority to CN201811520538.XA
Publication of CN111307170A
Application granted
Publication of CN111307170B
Status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/3407: Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415: Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an unmanned vehicle driving planning method, device, equipment and medium. The method comprises the following steps: receiving an intelligent driving instruction sent by a target vehicle, and acquiring positioning information of the target vehicle; enabling a binocular camera of the target vehicle to collect synchronous image data within a preset monitoring range; determining a dynamic receiving range of the target vehicle according to its positioning information, and acquiring shared image data within the dynamic receiving range from a cloud database; acquiring synchronous monitoring data generated after decompression inverse transformation of the shared image data; and planning the driving of the target vehicle according to the synchronous image data and the synchronous monitoring data. The invention realizes rapid transmission of shared data within a region, helps early-warning judgments to be made in time, handles emergencies around the vehicle promptly and effectively, and improves the safety of the unmanned vehicle.

Description

Unmanned vehicle driving planning method, device, equipment and medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a method, a device, equipment and a medium for planning the running of an unmanned vehicle.
Background
In Advanced Driver Assistance Systems (ADAS) for unmanned vehicles, sensors are a key component: they provide the central controller with image data of the vehicles in front of, behind, and to the sides of the vehicle. Sources of vehicle data include radar, ultrasonic sensors, lasers, and cameras. Among these, camera data allows changes during dynamic driving to be observed in real time, and cameras are especially popular because of their low cost, the large amount of information they gather, and their high resolution.
At present, the image data acquired while an unmanned vehicle drives on a road with the help of an advanced driver assistance system comes only from the single vehicle body, so emergencies are handled from a single perspective; for example, traffic accidents occur because vehicles, or a vehicle and a moving obstacle, cannot obtain each other's effective visual information in time. Meanwhile, the amount of image data and information a vehicle acquires within the range of a single-view lens is small, its judgments about the road are one-sided, and early-warning judgments cannot be made through timely communication, so continuous optimization is needed.
Disclosure of Invention
Embodiments of the present invention provide an unmanned vehicle driving planning method, apparatus, device, and medium that can efficiently transmit synchronous image data and share synchronous monitoring data with other vehicles while a target vehicle drives unmanned, so that the target vehicle is controlled to drive unmanned according to the synchronous image data and the synchronous monitoring data; at the same time, emergencies are handled promptly according to these data, improving the safety of the unmanned vehicle.
A method of unmanned vehicle driving planning, comprising:
receiving an intelligent driving instruction sent by a target vehicle, and acquiring positioning information of the target vehicle;
acquiring synchronous image data in a preset monitoring range acquired by a binocular camera of the target vehicle;
determining a dynamic receiving range of the target vehicle according to the positioning information of the target vehicle, and acquiring shared image data in the dynamic receiving range from a cloud database; the shared image data refers to image data which are synchronously transmitted to the cloud database by all vehicles in the current time period;
acquiring synchronous monitoring data generated after the shared image data is subjected to decompression and inverse transformation;
and planning vehicle running of the target vehicle according to the synchronous image data and the synchronous monitoring data.
An unmanned vehicle driving planning apparatus comprising:
the positioning module is used for receiving an intelligent driving instruction sent by a target vehicle and acquiring positioning information of the target vehicle;
the acquisition module is used for acquiring synchronous image data in a preset monitoring range acquired by a binocular camera of the target vehicle;
the receiving module is used for determining the dynamic receiving range of the target vehicle according to the positioning information of the target vehicle and acquiring shared image data in the dynamic receiving range from a cloud database; the shared image data refers to image data which are synchronously transmitted to the cloud database by all vehicles in the current time period;
the decompression module is used for acquiring synchronous monitoring data generated after the shared image data is subjected to decompression inverse transformation;
and the planning module is used for planning the vehicle running of the target vehicle according to the synchronous image data and the synchronous monitoring data.
A computer device comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, the processor implementing the above-described unmanned vehicle driving planning method when executing the computer readable instructions.
A computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the above-described unmanned vehicle driving planning method.
According to the unmanned vehicle driving planning method, device, equipment and medium, the synchronous image data collected by the binocular camera of the target vehicle and the synchronous monitoring data within the dynamic receiving range of the target vehicle are acquired, and vehicle driving planning is performed by combining the two, so that the amount of image data and information obtained is richer; the invention can efficiently transmit the synchronous image data and share the synchronous monitoring data with other vehicles during the unmanned driving of the target vehicle, thereby controlling the target vehicle to drive unmanned according to the synchronous image data and the synchronous monitoring data.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without inventive labor.
FIG. 1 is a schematic diagram of an application environment of the unmanned vehicle driving planning method in an embodiment of the present invention;
FIG. 2 is a flowchart of the unmanned vehicle driving planning method in an embodiment of the present invention;
FIG. 3 is a flowchart of step S20 of the unmanned vehicle driving planning method in an embodiment of the present invention;
FIG. 4 is a flowchart of step S203 of the unmanned vehicle driving planning method in an embodiment of the present invention;
FIG. 5 is a flowchart of the compression forward transform of the unmanned vehicle driving planning method in an embodiment of the present invention;
FIG. 6 is a flowchart of step S40 of the unmanned vehicle driving planning method in an embodiment of the present invention;
FIG. 7 is a flowchart of the decompression inverse transform of the unmanned vehicle driving planning method in an embodiment of the present invention;
FIG. 8 is a schematic block diagram of the unmanned vehicle driving planning apparatus in an embodiment of the present invention;
FIG. 9 is a schematic block diagram of the acquisition module of the unmanned vehicle driving planning apparatus in an embodiment of the present invention;
FIG. 10 is a schematic diagram of a computer device in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The unmanned vehicle driving planning method provided by the invention can be applied in the application environment shown in fig. 1, where the unmanned vehicle communicates with a server through a network. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers.
In an embodiment, as shown in fig. 2, a method for planning the driving of an unmanned vehicle is provided, which is described by taking the server in fig. 1 as an example, and includes the following steps:
and S10, receiving an intelligent driving instruction sent by a target vehicle, and acquiring the positioning information of the target vehicle.
The intelligent driving instruction refers to an instruction sent to the server when the owner of the unmanned vehicle (i.e., the target vehicle) inputs operating parameters and destination position information on a control panel provided on the unmanned vehicle and triggers a preset automatic driving mode button or manual driving mode button by clicking, sliding, or the like.
Preferably, when the intelligent driving instruction sent by the target vehicle is received, the positioning information of the target vehicle is obtained through a positioning system provided in the unmanned vehicle. The positioning information and the destination position information may then be marked in a preset electronic map, and a plurality of initial travel routes planned by the electronic map are received, from which a preferred route is automatically selected (for example, by parameters such as fastest time, lowest cost, or shortest distance); the unmanned vehicle is made to operate along the preferred route according to the operating parameters, and the preferred route is subsequently adjusted according to the image data obtained in real time in step S40.
And S20, acquiring synchronous image data in a preset monitoring range acquired by the binocular camera of the target vehicle.
The binocular cameras include binocular cameras arranged at the front and rear, the left and right sides, the upper-left and lower-right, and the upper-right and lower-left of the unmanned vehicle; each binocular camera comprises two single cameras, and each single camera collects one path of image data within the monitoring range of its binocular camera. That is, the synchronous image data refers to the image data of all these directions around the unmanned vehicle collected synchronously and in parallel by the binocular cameras. The synchronous image data includes, but is not limited to, environmental image data around the vehicle, as well as pedestrian-flow data, surrounding-vehicle information, and the like extracted from that environmental image data.
Specifically, when the unmanned vehicle (i.e., the target vehicle) is not started, the binocular cameras are in a closed or energy-saving state; when the unmanned vehicle is started, the binocular cameras are woken up, that is, automatically switched from the energy-saving or closed state to the working state, and are enabled to capture synchronous image data according to a preset image acquisition period (i.e., one acquisition every preset time interval).
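As an illustration only, this wake-up and periodic acquisition behaviour can be sketched in Python as follows; the class name BinocularCamera, its state names, and the capture_pair interface are assumptions made for the sketch, since the patent specifies behaviour rather than an API.

```python
# Minimal sketch of the camera wake-up and periodic capture logic described
# above. BinocularCamera, its states, and capture_pair() are hypothetical.
import time

class BinocularCamera:
    def __init__(self, position):
        self.position = position        # e.g. "front", "rear", "left", "right"
        self.state = "closed"           # closed / energy-saving / working

    def wake(self):
        # Automatically switch from the energy-saving or closed state
        # to the working state when the vehicle is started.
        self.state = "working"

    def capture_pair(self):
        # Placeholder for grabbing one synchronized frame from each of the
        # two single cameras that make up this binocular camera.
        return ("left_frame", "right_frame")

def capture_loop(cameras, period_s=0.1):
    """Yield one round of synchronous image data per preset acquisition period."""
    for cam in cameras:
        cam.wake()
    while True:
        yield [cam.capture_pair() for cam in cameras]
        time.sleep(period_s)
```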
S30, determining the dynamic receiving range of the target vehicle according to the positioning information of the target vehicle, and acquiring shared image data in the dynamic receiving range from a cloud database; the shared image data refers to image data which are synchronously transmitted to the cloud database by all vehicles in the current time period.
The dynamic receiving range is the range within which synchronous monitoring data is accepted, determined according to the positioning information of the target vehicle; it is set as required, for example: a circular area within 2 kilometers centered on the target vehicle.
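A minimal sketch of such a range check follows, assuming the range is a circle of fixed radius centred on the target vehicle and that positions are (latitude, longitude) pairs in degrees; neither assumption is fixed by the patent.

```python
# Hedged sketch: membership test for a circular dynamic receiving range
# (2 km radius as in the example above), using the haversine distance.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(p1, p2):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def in_dynamic_receiving_range(target_pos, other_pos, radius_m=2_000.0):
    return haversine_m(target_pos, other_pos) <= radius_m
```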
The cloud database is used to store the synchronous image data collected through binocular cameras by all vehicles with the same sharing authority, and provides query functions by region and by time period. It can be understood that, in one embodiment, each vehicle enterprise of unmanned vehicles may create its own cloud database, and each vehicle enterprise needs to provide sharing authority for all unmanned vehicles under its name.
Further, after the synchronous image data of the target vehicle is compressed (once compressed, it is marked as shared image data), it may be transmitted to the cloud database communicatively connected with the server. After the compressed synchronous image data of the target vehicle has been transmitted to the cloud database, if the target vehicle is within the dynamic receiving range of another vehicle in the current time period, the synchronous image data of the target vehicle that was transmitted to the cloud server may serve as the synchronous monitoring data of that other vehicle (after the shared image data is retrieved from the cloud server, the synchronous monitoring data is generated by decompression inverse transformation). If the target vehicle moves from a first location to a second location within the current time period, and the first location is within the dynamic receiving range of the second location, then after the synchronous image data captured at the first location has been transmitted to the cloud database, it may serve as the synchronous monitoring data at the second location; in this case it may be obtained either from the cloud database or directly from the database of the target vehicle (the synchronous image data of the target vehicle at the first location is stored in that database in advance, and extracting it from the local database is faster).
Preferably, when the unmanned vehicle is started, the time information of the target vehicle (the current time point and the current time period corresponding to it) is acquired, a query instruction is generated according to the time information and the positioning information, the query instruction is sent to the cloud database, and the shared image data returned by the cloud database is received.
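The query instruction can be sketched as below; the field names and the fixed-length time buckets are illustrative assumptions, since the patent only states that the instruction carries the time information and the positioning information.

```python
# Sketch of assembling the query instruction from the time information and
# positioning information. Field names and bucket length are assumptions.
from dataclasses import dataclass, asdict
import time

@dataclass
class QueryInstruction:
    time_point: float    # current time point (epoch seconds)
    time_period: str     # id of the current time period containing time_point
    latitude: float
    longitude: float
    radius_m: float      # dynamic receiving range

def build_query(lat, lon, radius_m=2_000.0, bucket_s=60):
    now = time.time()
    period_id = str(int(now // bucket_s))  # fixed-length buckets (assumption)
    return asdict(QueryInstruction(now, period_id, lat, lon, radius_m))
```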
And S40, acquiring synchronous monitoring data generated after the shared image data is subjected to decompression and inverse transformation.
In this embodiment, after the shared image data corresponding to the time information and the positioning information is automatically acquired from the cloud database according to the time information and the positioning information, the acquired shared image data is decompressed and then transmitted to the target vehicle. It can be understood that the shared image data stored in the cloud database is compressed synchronous image data, and when the shared image data is transmitted to a vehicle, the shared image data needs to be decompressed to obtain data (i.e., synchronous monitoring data) that can be used by the vehicle.
And S50, performing vehicle driving planning on the target vehicle according to the synchronous image data and the synchronous monitoring data.
Wherein the vehicle driving plan comprises route control and vehicle control; the route control includes lane change, route planning/adjustment, etc.; the vehicle control includes vehicle acceleration, vehicle turning around, vehicle starting, vehicle stopping, and the like.
It can be understood that the shared image data of the vehicles in a dynamic receiving range (a close range) has time correlation and space correlation, and by taking each vehicle as a central node, the image data of different vehicles in the dynamic receiving range in continuous time can be selected to make a more reasonable vehicle driving plan.
For example, after front-road image data is acquired through the binocular camera arranged directly in front of the target vehicle, the distance between the target vehicle and the vehicle ahead is judged from the front-road image data and a short-distance deceleration warning is issued; further, the distance between the target vehicle and the vehicle ahead is obtained in real time, and when this distance is detected to be smaller than a preset deceleration distance threshold, the target vehicle is automatically controlled to decelerate.
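The two-stage behaviour of this example (warn first, then decelerate) can be sketched as follows; both threshold values are illustrative assumptions, not taken from the patent.

```python
# Sketch of the short-distance warning / deceleration logic of the example.
WARN_DISTANCE_M = 50.0    # distance at which a deceleration warning is issued
DECEL_DISTANCE_M = 20.0   # preset deceleration distance threshold

def plan_following(distance_to_front_m):
    """Return the action for the current frame given the front-vehicle distance."""
    if distance_to_front_m < DECEL_DISTANCE_M:
        return "decelerate"   # automatically control the target vehicle to slow down
    if distance_to_front_m < WARN_DISTANCE_M:
        return "warn"         # send a short-distance deceleration early warning
    return "keep"
```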
As another example, under low visibility in extreme weather (such as fog or dust), the traffic light signal is captured by the binocular camera arranged directly in front of the target vehicle and a voice prompt is given. Further, when the signal indicates a green light, the maximum probability that the vehicle can pass the traffic light is obtained from the front-road image data, the current operating parameters of the vehicle, the distance between the vehicle and the traffic light, and other information; when this maximum passing probability is detected to be smaller than a preset green-light deceleration threshold, the target vehicle is automatically controlled to decelerate.
In summary, according to the unmanned vehicle driving planning method provided by the invention, the synchronous image data of the target vehicle and the shared image data around the target vehicle are obtained at the same time, and vehicle driving planning is performed by combining the two, so that the amount of image data and information obtained is richer, emergencies are easier to handle, and early-warning judgments can be made through timely communication with the server.
In an embodiment, as shown in fig. 3, the step S20 specifically includes the following steps:
S201, enabling multiple groups of binocular cameras to collect the synchronous image data synchronously over multiple paths; the binocular cameras include binocular cameras arranged at the front and rear, the left and right sides, the upper-left and lower-right, and the upper-right and lower-left of the target vehicle, and each binocular camera includes two single cameras;
that is, the binocular cameras installed at the front and rear, the left and right sides, the upper left and right lower sides, and the upper right and lower left of the target vehicle are all a set of binocular cameras, and the multiple sets of binocular cameras are enabled to synchronously acquire the multi-path synchronous image data in parallel, so that the real-time effectiveness of the image data can be ensured.
S202, storing the synchronous image data acquired by each binocular camera into a two-dimensional point matrix; the two-dimensional point matrix is a two-dimensional matrix described by pixel points.
That is, each single camera of the binocular cameras provided on the target vehicle can serve as an acquisition node; the synchronous image data of each acquisition node is collected separately and stored using a two-dimensional point matrix.
S203, after the two-dimensional point matrix is compressed and transformed, a two-dimensional component matrix is obtained; the two-dimensional component matrix refers to a two-dimensional matrix described by non-zero-dimensional components.
The compression forward transformation is used for converting synchronous image data collected by the binocular camera into shared image data and comprises row transformation, column transformation, high-low frequency component transformation and component replacement.
In this embodiment, after the synchronous image data is stored using the two-dimensional point matrix in step S202, the two-dimensional point matrix is subjected to row transformation and column transformation to obtain the four dimensional components corresponding to the synchronous image data; the highest-frequency component is removed, and the remaining three dimensional components are retained as the effective dimensional components.
And S204, marking the two-dimensional component matrix as shared image data, and transmitting the shared image data to the cloud database.
In summary, according to the unmanned vehicle driving planning method provided by the invention, the synchronous image data at all acquisition nodes is collected, the synchronous image data corresponding to each acquisition node is stored using a two-dimensional point matrix, the two-dimensional point matrix is subjected to the compression forward transformation to obtain the shared image data, and the shared image data is transmitted to the cloud database, which saves transmission energy and improves transmission efficiency.
In an embodiment, as shown in fig. 4, the step S203 specifically includes the following steps:
S2031, performing row transformation with the two-dimensional point matrix as the original signal, and acquiring a row-transformed signal containing a first detail signal and a first approximation signal.
Illustratively, the compression forward transform is shown in FIG. 5.
Firstly, splitting the line pixels in the original signal to obtain a first odd signal and a first even signal. That is, the splitting stage is entered to split the original signal f (t) into a first odd signal xo according to the sampling interval τm(t) and a first even signal xem(t) represented by the following formula (1):
Figure GDA0003275547680000091
Secondly, the first odd signal is predicted according to a preset first predictor to obtain a first detail signal. That is, the prediction stage is entered: keeping the first even signal unchanged, the first predictor Pm[·] is used to predict the first odd signal, and the difference between the predicted value and the actual value is defined as the first detail signal dm(t), as shown in the following formula (2):
dm(t)=xom(t)-Pm[xem(t)] (2)
Finally, the first even signal is updated according to a preset first update operator and the first detail signal to obtain a first approximation signal. That is, the update stage is entered: the first update operator Um[·] is introduced, and the original first even signal is updated with the first detail signal to obtain the first approximation signal cm(t), as shown in the following formula (3):
cm(t)=xem(t)+Um[dm(t)] (3)
Preferably, when the original signal is row-transformed, the first detail signal is the high-frequency component and the first approximation signal is the low-frequency component; here the first predictor Pm[·] returns xem(t), i.e. the first even signal itself, and the first update operator Um[·] returns dm(t), i.e. the first detail signal itself.
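A minimal Python sketch of one such lifting step, implementing formulas (1)-(3) with the operators stated above (the predictor returns the even signal itself and the update operator returns the detail signal itself), is given below; the even/odd split by array index assumes a unit sampling interval.

```python
# One lifting step (split / predict / update), formulas (1)-(3), using the
# P and U operators stated in the text: P[x_e] = x_e and U[d] = d.
import numpy as np

def lift_forward(signal):
    """signal: 1-D array of even length; returns (approximation c, detail d)."""
    x_even, x_odd = signal[0::2], signal[1::2]  # split, formula (1)
    d = x_odd - x_even                          # predict, formula (2)
    c = x_even + d                              # update, formula (3)
    return c, d
```

With these particular operators c(t) coincides with the odd samples; as noted below, choosing other predictors and update operators yields different compression effects.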
S2032, performing column transformation on the row-transformed signal to obtain a column-transformed signal containing a second detail signal and a second approximation signal.
Specifically, the column pixels in the row-transformed signal are split to obtain a second odd signal yon(t) and a second even signal yen(t); a preset second predictor Pn[·] is used to predict the second odd signal yon(t) to obtain a second detail signal dn(t), and the second even signal yen(t) is updated with a preset second update operator Un[·] and the second detail signal dn(t) to obtain a second approximation signal cn(t). Preferably, when the row-transformed signal is column-transformed, the second detail signal is the high-frequency component and the second approximation signal is the low-frequency component; here the second predictor Pn[·] returns yen(t), i.e. the second even signal itself, and the second update operator Un[·] returns dn(t), i.e. the second detail signal itself.
For example, suppose part of the column pixels in a two-dimensional point matrix form the array

H = | y1(t0)  y2(t0) |
    | y1(t1)  y2(t1) |

where y1(t0) is the even signal of the first term of the first column, y2(t0) the even signal of the first term of the second column, y1(t1) the odd signal of the first term of the first column, and y2(t1) the odd signal of the first term of the second column. Column transformation of the array H then gives

H' = | c1(t1)  c2(t2) |
     | d1(t1)  d2(t2) |

where c1(t1) and d1(t1) are the approximation signal and detail signal of the first term of the first column obtained by column transformation from y1(t0) and y1(t1), and c2(t2) and d2(t2) are the approximation signal and detail signal of the first term of the second column obtained by column transformation from y2(t0) and y2(t1).
S2033, performing component conversion and component replacement on the second detail signal and the second approximation signal in the signals after the column transformation to obtain a two-dimensional component matrix.
That is, the pixel points described by the detail signal and the approximation signal are re-described by high- and low-frequency components, and the highest of the dimensional components (the high-high frequency component) is replaced by a zero component.
Understandably, after each pixel point in the synchronous image data has been processed by row transformation and column transformation, the high-frequency components (values) and low-frequency components (values) of the rows and columns of the synchronous image data are obtained. In regions with higher spatial approximation, the high-frequency coefficient values are smaller and can be discarded; conversely, the lower the approximation, the more selectively the high-frequency coefficient values should be retained. Different compression effects can thus be obtained by adapting different predictors and update operators and by selectively discarding different high-frequency coefficients.
Illustratively, suppose part of the pixels in a two-dimensional point matrix form the array

A = | x1(t0)  x1(t1) |
    | x2(t0)  x2(t1) |

where x1(t0) is the first pixel point of the first row, x1(t1) the second pixel point of the first row, x2(t0) the first pixel point of the second row, and x2(t1) the second pixel point of the second row. The transformation of the array A then proceeds as follows:

Step 1: row transformation of the array A gives the array

A' = | c1(t1)  d1(t1) |
     | c2(t2)  d2(t2) |

where c1(t1) and d1(t1) are the first approximation signal and first detail signal of the first term obtained by row transformation from x1(t0) and x1(t1), and c2(t2) and d2(t2) are the first approximation signal and first detail signal of the second term obtained by row transformation from x2(t0) and x2(t1). The first approximation signal c1(t1) and first detail signal d1(t1) of the first term become the even signals in step 2 (the column transformation); the first approximation signal c2(t2) and first detail signal d2(t2) of the second term become the odd signals in step 2.

Step 2: column transformation of the array A' gives the array

A'' = | c1c(t1)  c2d(t2) |
      | d1c(t1)  d2d(t2) |

where c1c(t1) and d1c(t1) are the second approximation signal and second detail signal of the first term obtained by column transformation of the first column of A', and c2d(t2) and d2d(t2) are the second approximation signal and second detail signal of the second term obtained by column transformation of the second column of A'.

Step 3: component conversion of the array A'' gives the array

B = | L-L  H-L |
    | L-H  H-H |

where L-L is the low-low frequency component, H-L the high-low frequency component, L-H the low-high frequency component, and H-H the high-high frequency component.

Step 4: component replacement of the array B (i.e., the high-high frequency component is replaced by 0) gives the array

B' = | L-L  H-L |
     | L-H   0  |
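Under the same assumptions, the four steps of the worked example can be reproduced end to end in a short sketch that reuses lift_forward from above; the quadrant layout (approximations left/top, details right/bottom) follows from concatenating each lifting output, and zeroing the bottom-right quadrant is the component replacement of step 4.

```python
# Sketch of the full compression forward transform on a 2x2 block: row
# transform, column transform, then replacement of the H-H quadrant by zeros.
# Reuses lift_forward from the previous sketch.
import numpy as np

def compress_forward(points):
    """points: 2-D array with even numbers of rows and columns."""
    rows = np.array([np.concatenate(lift_forward(r)) for r in points])    # step 1
    cols = np.array([np.concatenate(lift_forward(c)) for c in rows.T]).T  # step 2
    # Step 3: the quadrants now hold the L-L, H-L, L-H and H-H components.
    comp = cols.copy()
    h, w = comp.shape
    comp[h // 2:, w // 2:] = 0   # step 4: replace the H-H component by 0
    return comp

A = np.array([[10.0, 12.0],
              [11.0, 15.0]])
print(compress_forward(A))   # [[15. 4.], [3. 0.]] for this block
```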
In an embodiment, as shown in fig. 6, the step S40 specifically includes the following steps:
S401, acquiring the two-dimensional component matrix corresponding to the shared image data.
The shared image data is the two-dimensional component matrix (i.e., image data) described by high- and low-frequency components (dimensional components) obtained after the compression forward transformation.
Understandably, when another vehicle's shared image data described by high- and low-frequency components is transmitted to the target vehicle, the shared image data needs to be restored by decompression inverse transformation into synchronous monitoring data (i.e., a two-dimensional point matrix described by pixel points).
S402, acquiring component signals including the second detail signal and the second approximation signal according to the two-dimensional component matrix.
That is, the component signal is similar to a column-transformed signal obtained by subjecting an original signal to row transformation and column transformation.
S403, inverse line transforming the component signals to obtain inverse line transformed signals including the second even-numbered signals and the second odd-numbered signals.
Illustratively, the decompression inverse transform is shown in fig. 7.
First, the second even signal is obtained according to the second update operator, the second detail signal, and the second approximation signal. That is, the second update operator Un[·] is introduced, and the original second even signal y'en(t) is obtained using the known second detail signal and second approximation signal in the component signal, as shown in the following formula (4):
y′en(t)=cn(t)-Un[dn(t)] (4)
Then, the second odd signal is obtained according to the second predictor, the second even signal, and the second detail signal. That is, the second predictor Pn[·] is introduced, and the original second odd signal y'on(t) is obtained using the known second detail signal in the component signal and the second even signal just obtained, as shown in the following formula (5):
y′on(t)=dn(t)+Pn[y′en(t)] (5)
S404, performing column inverse transformation on the inverse-row-transformed signals to obtain a two-dimensional point matrix containing the first even signal and the first odd signal, and marking the two-dimensional point matrix as the synchronous monitoring data.
Specifically, the first detail signal and the first approximation signal before column inverse transformation are obtained from the second even signal and the second odd signal in the inverse-row-transformed signals. The original first even signal x'em(t) is then obtained according to the first update operator Um[·], the first detail signal, and the first approximation signal, and the original first odd signal x'om(t) is obtained according to the first predictor Pm[·], the first even signal, and the first detail signal. Further, the first even signal x'em(t) and the first odd signal x'om(t) obtained from the component signals by inverse row transformation and inverse column transformation are combined into the original signal y'(t), that is, the two-dimensional point matrix.
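Continuing the same sketch, the decompression inverse transform applies formulas (4) and (5) to undo the two lifting passes; recovery is exact except for the information discarded when the H-H component was zeroed.

```python
# Sketch of the decompression inverse transform, formulas (4) and (5).
import numpy as np

def lift_inverse(c, d):
    """Inverse of lift_forward: rebuild the interleaved 1-D signal."""
    x_even = c - d               # formula (4): y'_e(t) = c(t) - U[d(t)]
    x_odd = d + x_even           # formula (5): y'_o(t) = d(t) + P[y'_e(t)]
    out = np.empty(c.size * 2, dtype=c.dtype)
    out[0::2], out[1::2] = x_even, x_odd
    return out

def decompress_inverse(components):
    """Undo compress_forward, up to the information lost by zeroing H-H."""
    h, w = components.shape
    rows = np.array([lift_inverse(col[: h // 2], col[h // 2:])
                     for col in components.T]).T   # recover second even/odd signals (S403)
    points = np.array([lift_inverse(r[: w // 2], r[w // 2:])
                       for r in rows])             # recover the point matrix (S404)
    return points
```

Applying decompress_inverse(compress_forward(A)) to the 2x2 example above returns A exactly only when the discarded H-H coefficient was zero; otherwise the reconstruction differs by that coefficient, which is the lossy part of the scheme.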
In summary, the unmanned vehicle driving planning method provided by the invention converts the synchronous image data into shared image data by compression forward transformation and transmits it to the cloud database, and obtains the shared image data within the dynamic receiving range from the cloud database and converts it into synchronous monitoring data by decompression inverse transformation, so that unmanned vehicles at crowded intersections and in areas with heavy pedestrian flow can rapidly transmit and share the image data within their respective fields of view, the environment around the vehicle body is handled timely and effectively, and the safety of the unmanned vehicle is improved.
In one embodiment, as shown in fig. 8, an unmanned vehicle driving planning apparatus is provided, which corresponds one-to-one to the unmanned vehicle driving planning method in the above embodiment. The unmanned vehicle driving planning apparatus includes a positioning module 110, an acquisition module 120, a receiving module 130, a decompression module 140, and a planning module 150. The functional modules are explained in detail as follows:
the positioning module 110 is configured to receive an intelligent driving instruction sent by a target vehicle, and acquire positioning information of the target vehicle.
And the acquisition module 120 is configured to acquire synchronous image data within a preset monitoring range acquired by a binocular camera of the target vehicle.
The receiving module 130 is configured to determine a dynamic receiving range of the target vehicle according to the positioning information of the target vehicle, and acquire shared image data within the dynamic receiving range from a cloud database; the shared image data refers to image data which are synchronously transmitted to the cloud database by all vehicles in the current time period.
A decompression module 140, configured to obtain the synchronous monitoring data generated after the shared image data is subjected to decompression inverse transformation.
And the planning module 150 is configured to plan vehicle driving of the target vehicle according to the synchronous image data and the synchronous monitoring data.
In an embodiment, as shown in fig. 9, the acquisition module 120 of the unmanned vehicle driving planning apparatus includes the following sub-modules, each of which is described in detail as follows:
the synchronous acquisition submodule 121 is configured to enable multiple groups of binocular cameras to acquire the synchronous image data in a multipath synchronous manner; the binocular camera contains the setting and is in the binocular camera of the front and back, the left and right sides, upper left right side below and upper right left side below of target vehicle, and each the binocular camera contains two single cameras.
The storage submodule 122 is configured to store the synchronous image data acquired by each binocular camera into a two-dimensional dot matrix; the two-dimensional point matrix is a two-dimensional matrix described by pixel points.
The compression submodule 123 is configured to obtain a two-dimensional component matrix after performing compression forward transform on the two-dimensional point matrix; the two-dimensional component matrix refers to a two-dimensional matrix described by non-zero-dimensional components.
The transmitting sub-module 124 is configured to mark the two-dimensional component matrix as shared image data, and transmit the shared image data to the cloud database.
In one embodiment of the unmanned vehicle driving planning apparatus, the compression submodule 123 includes the following units, each of which is described in detail as follows:
and the line transformation unit is used for performing line transformation on the two-dimensional point matrix serving as the original signal to obtain a line-transformed signal containing a first detail signal and a first approximation signal.
And the column conversion unit is used for performing column conversion on the row-converted signals to obtain column-converted signals containing second detail signals and second approximation signals.
And the component processing unit is used for carrying out component conversion and component replacement on the second detail signal and the second approximation signal in the signals after the column transformation to obtain a two-dimensional component matrix.
In one embodiment of the unmanned vehicle driving planning apparatus, the row transformation unit includes the following sub-units, each of which is described in detail as follows:
and the first splitting subunit is used for splitting the line pixels in the original signal to acquire a first odd signal and a first even signal.
And the first prediction subunit is used for predicting the first odd-numbered signal according to a preset first predictor to acquire a first detail signal.
And the first updating subunit is used for updating the first even number signal according to a preset first updating operator and the first detail signal to acquire a first approximation signal.
In one embodiment of the unmanned vehicle driving planning apparatus, the column transformation unit includes the following sub-units, each of which is described in detail as follows:
and the second splitting subunit is used for splitting the column pixels in the row-transformed signal to obtain a second odd signal and a second even signal.
And the second prediction subunit is used for predicting the second odd-numbered signal according to a preset second predictor to acquire a second detail signal.
And the second updating subunit is used for updating the second even number signal according to a preset second updating operator and the second detail signal to acquire a second approximation signal.
In one embodiment of the unmanned vehicle driving planning apparatus, the decompression module 140 includes the following sub-modules, each of which is described in detail as follows:
a data acquisition sub-module for acquiring the two-dimensional component matrix corresponding to the shared image data.
And the signal extraction submodule is used for acquiring component signals containing the second detail signal and the second approximation signal according to the two-dimensional component matrix.
And the first inverse transformation submodule is used for performing inverse row transformation on the component signals to obtain signals after inverse row transformation, wherein the signals comprise the second even-numbered signals and the second odd-numbered signals.
And the second inverse transformation submodule is used for performing column inverse transformation on the signals subjected to the row inverse transformation to obtain a two-dimensional point matrix containing the first even-numbered signals and the first odd-numbered signals, and marking the two-dimensional point matrix as the synchronous monitoring data.
For specific limitations of the unmanned vehicle driving planning apparatus, reference may be made to the above limitations of the unmanned vehicle driving planning method, which are not described herein again. All or part of each module in the unmanned vehicle driving planning device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer readable instructions, and a database. The internal memory provides an environment for the operating system and execution of computer-readable instructions in the non-volatile storage medium. The computer readable instructions, when executed by a processor, implement a method for unmanned vehicle travel planning.
In one embodiment, a computer device is provided, comprising a memory, a processor, and computer readable instructions stored on the memory and executable on the processor, the processor when executing the computer readable instructions implementing the steps of:
receiving an intelligent driving instruction sent by a target vehicle, and acquiring positioning information of the target vehicle;
acquiring synchronous image data in a preset monitoring range acquired by a binocular camera of the target vehicle;
determining a dynamic receiving range of the target vehicle according to the positioning information of the target vehicle, and acquiring shared image data in the dynamic receiving range from a cloud database; the shared image data refers to image data which are synchronously transmitted to the cloud database by all vehicles in the current time period;
acquiring synchronous monitoring data generated after the shared image data is subjected to decompression and inverse transformation;
and planning vehicle running of the target vehicle according to the synchronous image data and the synchronous monitoring data.
In one embodiment, a computer readable storage medium is provided having computer readable instructions stored thereon which, when executed by a processor, perform the steps of:
receiving an intelligent driving instruction sent by a target vehicle, and acquiring positioning information of the target vehicle;
acquiring synchronous image data in a preset monitoring range acquired by a binocular camera of the target vehicle;
determining a dynamic receiving range of the target vehicle according to the positioning information of the target vehicle, and acquiring shared image data in the dynamic receiving range from a cloud database; the shared image data refers to image data which are synchronously transmitted to the cloud database by all vehicles in the current time period;
acquiring synchronous monitoring data generated after the shared image data is subjected to decompression and inverse transformation;
and planning vehicle running of the target vehicle according to the synchronous image data and the synchronous monitoring data.
It will be understood by those of ordinary skill in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware associated with computer readable instructions, which can be stored in a non-volatile computer readable storage medium, and when executed, can include processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of each functional unit or module is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units or modules according to requirements, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (9)

1. A method for planning the driving of an unmanned vehicle, comprising:
receiving an intelligent driving instruction sent by a target vehicle, and acquiring positioning information of the target vehicle;
acquiring synchronous image data in a preset monitoring range acquired by a binocular camera of the target vehicle;
storing the synchronous image data acquired by each binocular camera into a two-dimensional point matrix; the two-dimensional point matrix is a two-dimensional matrix described by pixel points;
after the two-dimensional point matrix is subjected to compression forward transformation, a two-dimensional component matrix is obtained; the two-dimensional component matrix is a two-dimensional matrix described by using non-zero dimension components;
marking the two-dimensional component matrix as shared image data, and transmitting the shared image data to a cloud database;
determining a dynamic receiving range of the target vehicle according to the positioning information of the target vehicle, and acquiring shared image data in the dynamic receiving range from the cloud database; the shared image data refers to image data which are synchronously transmitted to the cloud database by all vehicles in the current time period;
acquiring synchronous monitoring data generated after the shared image data is subjected to decompression and inverse transformation;
performing vehicle driving planning on the target vehicle according to the synchronous image data and the synchronous monitoring data;
after the two-dimensional point matrix is subjected to compression forward transformation, a two-dimensional component matrix is obtained, which comprises the following steps:
performing row transformation on the two-dimensional point matrix serving as an original signal to obtain a row-transformed signal containing a first detail signal and a first approximation signal;
performing column transformation on the row-transformed signal to obtain a column-transformed signal containing a second detail signal and a second approximation signal;
and performing component conversion and component replacement on the second detail signal and the second approximation signal in the signals after the column transformation to obtain a two-dimensional component matrix.
2. The unmanned vehicle driving planning method of claim 1, wherein the acquiring of the synchronous image data within the preset monitoring range collected by the binocular camera of the target vehicle comprises:
enabling a plurality of groups of binocular cameras to collect the synchronous image data synchronously over multiple paths; the binocular cameras include binocular cameras arranged at the front and rear, the left and right sides, the upper-left and lower-right, and the upper-right and lower-left of the target vehicle, and each binocular camera includes two single cameras.
3. The unmanned vehicle driving planning method of claim 1, wherein the performing line transformation on the two-dimensional point matrix as the original signal to obtain a line-transformed signal including a first detail signal and a first approximation signal comprises:
splitting the row pixels in the original signal to obtain a first odd signal and a first even signal;
predicting the first odd-numbered signal according to a preset first predictor to obtain a first detail signal;
and updating the first even number signal according to a preset first updating operator and the first detail signal to obtain a first approximation signal.
4. The unmanned vehicle driving planning method of claim 3 wherein said performing a column transformation on said row transformed signals to obtain column transformed signals comprising second detail signals and second approximation signals comprises:
splitting the column pixels in the row-transformed signals to obtain a second odd signal and a second even signal;
predicting the second odd-numbered signal according to a preset second predictor to obtain a second detail signal;
and updating the second even number signal according to a preset second updating operator and the second detail signal to obtain a second approximation signal.
5. The unmanned vehicle driving planning method of claim 4, wherein said obtaining the synchronized monitoring data generated after said decompressing inverse transformation of the shared image data comprises:
obtaining the two-dimensional component matrix corresponding to the shared image data;
acquiring a component signal comprising the second detail signal and the second approximation signal according to the two-dimensional component matrix;
performing inverse row transform on the component signals to obtain inverse row transformed signals including the second even-numbered signals and the second odd-numbered signals;
and performing column inverse transformation on the signals subjected to the row inverse transformation to obtain a two-dimensional point matrix containing the first even-numbered signal and the first odd-numbered signal, and marking the two-dimensional point matrix as the synchronous monitoring data.
6. An unmanned vehicle driving planning apparatus, comprising:
the positioning module is used for receiving an intelligent driving instruction sent by a target vehicle and acquiring positioning information of the target vehicle;
the acquisition module is used for acquiring synchronous image data in a preset monitoring range acquired by a binocular camera of the target vehicle; the acquisition module comprises:
the storage submodule is used for storing the synchronous image data acquired by each binocular camera into a two-dimensional point matrix; the two-dimensional point matrix is a two-dimensional matrix described by pixel points;
the compression submodule is used for obtaining a two-dimensional component matrix after performing compression forward transformation on the two-dimensional point matrix; the two-dimensional component matrix is a two-dimensional matrix described by using non-zero dimension components;
after the two-dimensional point matrix is subjected to compression forward transformation, a two-dimensional component matrix is obtained, which comprises the following steps: performing row transformation on the two-dimensional point matrix serving as an original signal to obtain a row-transformed signal containing a first detail signal and a first approximation signal; performing column transformation on the row-transformed signal to obtain a column-transformed signal containing a second detail signal and a second approximation signal; performing component conversion and component replacement on the second detail signal and the second approximation signal in the column-transformed signal to obtain a two-dimensional component matrix;
the transmission submodule is used for marking the two-dimensional component matrix as shared image data and transmitting the shared image data to a cloud database;
the receiving module is used for determining the dynamic receiving range of the target vehicle according to the positioning information of the target vehicle and acquiring shared image data in the dynamic receiving range from the cloud database (a sketch of one possible range policy appears after this claim); the shared image data refers to image data synchronously transmitted to the cloud database by all vehicles in the current time period;
the decompression module is used for acquiring synchronous monitoring data generated after the shared image data is subjected to decompression inverse transformation;
and the planning module is used for planning the driving of the target vehicle according to the synchronous image data and the synchronous monitoring data.
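The compression submodule's forward transformation in claim 6 is separable: lift every row, then every column, then keep the significant coefficients as the component matrix. The sketch below reuses lifting_forward_1d from the sketch after claim 3; the thresholding used for component conversion and replacement is an assumption, since the claim does not spell out how components are selected:

    def forward_2d(matrix):
        """Separable 2-D lifting: transform rows, then columns (sketch;
        assumes even row and column counts)."""
        rows = []
        for row in matrix:                       # row transformation
            a, d = lifting_forward_1d(row)
            rows.append(a + d)                   # [approx | detail] layout
        cols = []
        for col in zip(*rows):                   # column transformation
            a, d = lifting_forward_1d(list(col))
            cols.append(a + d)
        return [list(r) for r in zip(*cols)]     # transpose back

    def to_component_matrix(coeffs, threshold=0):
        """Keep only coefficients whose magnitude exceeds the threshold,
        stored as a sparse {(row, col): value} map -- one plausible
        reading of a matrix 'described by non-zero dimension components'."""
        return {(r, c): v
                for r, row in enumerate(coeffs)
                for c, v in enumerate(row)
                if abs(v) > threshold}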
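Claim 6 likewise ties the dynamic receiving range to the target vehicle's positioning information without fixing a formula. One plausible policy, sketched below, grows the radius with vehicle speed so the range always covers the planning horizon, then filters the cloud records by distance; every field name and constant here is illustrative rather than taken from the patent:

    import math

    def dynamic_receiving_range(speed_mps, base_radius_m=200.0, horizon_s=10.0):
        """Radius covering the distance travelled within the planning
        horizon -- an assumed policy, not the patented formula."""
        return base_radius_m + speed_mps * horizon_s

    def shared_data_in_range(records, ego_lat, ego_lon, radius_m):
        """Keep cloud records whose source position lies inside the range.
        Uses an equirectangular approximation, adequate at city scales."""
        def dist_m(lat, lon):
            metres_per_deg = 111_320.0           # per degree of latitude
            dx = (lon - ego_lon) * metres_per_deg * math.cos(math.radians(ego_lat))
            dy = (lat - ego_lat) * metres_per_deg
            return math.hypot(dx, dy)
        return [r for r in records if dist_m(r["lat"], r["lon"]) <= radius_m]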
7. The unmanned vehicle driving planning apparatus of claim 6 wherein said acquisition module comprises:
the synchronous acquisition submodule is used for enabling a plurality of groups of binocular cameras to synchronously acquire the synchronous image data over multiple channels; the binocular cameras include binocular cameras arranged at the front and rear, the left and right sides, the upper-left and lower-right, and the upper-right and lower-left of the target vehicle, and each binocular camera comprises two single cameras.
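Claim 7's multipath synchronous acquisition can be approximated in software by releasing every capture thread on a single shared trigger; a production system would more likely use a hardware trigger line, and the camera interface (cam.grab()) below is purely hypothetical:

    import threading

    def capture_synchronously(cameras):
        """Grab one frame from every binocular camera at (approximately)
        the same instant using a shared software trigger."""
        trigger = threading.Event()
        frames = [None] * len(cameras)

        def worker(i, cam):
            trigger.wait()           # every worker blocks here...
            frames[i] = cam.grab()   # ...and fires together (hypothetical API)

        threads = [threading.Thread(target=worker, args=(i, c))
                   for i, c in enumerate(cameras)]
        for t in threads:
            t.start()
        trigger.set()                # release all workers at once
        for t in threads:
            t.join()
        return frames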
8. A computer device comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer readable instructions, implements the unmanned vehicle driving planning method of any one of claims 1 to 5.
9. A computer readable storage medium storing computer readable instructions, wherein the computer readable instructions, when executed by a processor, implement the unmanned vehicle driving planning method of any one of claims 1 to 5.
CN201811520538.XA 2018-12-12 2018-12-12 Unmanned vehicle driving planning method, device, equipment and medium Active CN111307170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811520538.XA CN111307170B (en) 2018-12-12 2018-12-12 Unmanned vehicle driving planning method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811520538.XA CN111307170B (en) 2018-12-12 2018-12-12 Unmanned vehicle driving planning method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN111307170A CN111307170A (en) 2020-06-19
CN111307170B true CN111307170B (en) 2022-03-18

Family

ID=71157967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811520538.XA Active CN111307170B (en) 2018-12-12 2018-12-12 Unmanned vehicle driving planning method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN111307170B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115631478A (en) * 2022-12-02 2023-01-20 广汽埃安新能源汽车股份有限公司 Road image detection method, device, equipment and computer readable medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150235094A1 (en) * 2014-02-17 2015-08-20 General Electric Company Vehicle imaging system and method
CN102663894B (en) * 2012-05-20 2014-01-01 杭州妙影微电子有限公司 Road traffic condition foreknowing system and method based on internet of things
CN102737509A (en) * 2012-06-29 2012-10-17 惠州天缘电子有限公司 Method and system for realizing image information sharing based on internet of vehicles
CN203149847U (en) * 2012-12-12 2013-08-21 华创车电技术中心股份有限公司 Road condition sharing server
CN104715603A (en) * 2013-12-13 2015-06-17 中兴通讯股份有限公司 Road monitoring method, photographic device, car-mounted terminal and system
US20160048714A1 (en) * 2013-12-27 2016-02-18 Empire Technology Development Llc Data collection scheme
KR20160064653A (en) * 2014-11-28 2016-06-08 현대모비스 주식회사 Apparatus and method for guiding driving route using photographic image
KR101741433B1 (en) * 2015-06-09 2017-05-30 엘지전자 주식회사 Driver assistance apparatus and control method for the same
WO2017063201A1 (en) * 2015-10-16 2017-04-20 华为技术有限公司 Road traffic information sharing method
CN105989712A (en) * 2015-11-06 2016-10-05 乐卡汽车智能科技(北京)有限公司 Vehicle data processing method and vehicle terminal
CN107883977A (en) * 2016-09-29 2018-04-06 法乐第(北京)网络科技有限公司 A kind of information sharing method, system and automobile
CN108462728A (en) * 2017-02-17 2018-08-28 中兴通讯股份有限公司 A kind of method and device, the vehicle mobile terminals of on-vehicle information processing
CN106971583A (en) * 2017-03-27 2017-07-21 宁波吉利汽车研究开发有限公司 A kind of traffic information shared system and sharing method based on vehicle-mounted networking equipment
CN107301668B (en) * 2017-06-14 2019-03-15 成都四方伟业软件股份有限公司 A kind of picture compression method based on sparse matrix, convolutional neural networks
CN107613014A (en) * 2017-09-29 2018-01-19 联想(北京)有限公司 A kind of data sharing method, system, terminal, storage medium and automobile
CN108766007A (en) * 2018-08-02 2018-11-06 成都秦川物联网科技股份有限公司 Road conditions alarm method based on car networking and car networking system

Also Published As

Publication number Publication date
CN111307170A (en) 2020-06-19

Similar Documents

Publication Publication Date Title
US10229590B2 (en) System and method for improved obstable awareness in using a V2X communications system
US10349011B2 (en) System and method for improved obstacle awareness in using a V2X communications system
US10613547B2 (en) System and method for improved obstacle awareness in using a V2X communications system
US20200346662A1 (en) Information processing apparatus, vehicle, mobile object, information processing method, and program
CN108574929B (en) Method and apparatus for networked scene rendering and enhancement in an onboard environment in an autonomous driving system
DE102020129456A1 (en) TRAJECTORY PREDICTION FROM A PRECALCULATED OR DYNAMICALLY GENERATED BANK OF TRAJECTORIES
DE102017126877A1 (en) Automated copilot control for autonomous vehicles
US20180281815A1 (en) Predictive teleassistance system for autonomous vehicles
DE102018106353A1 (en) TEMPORARY DATA ASSIGNMENTS FOR OPERATING AUTONOMOUS VEHICLES
DE112016007429T5 (en) Remote operating system, transport system and remote operating procedure
DE102019108644A1 (en) METHOD AND DEVICE FOR AUTOMATIC LEARNING OF RULES FOR AUTONOMOUS DRIVING
US11453410B2 (en) Reducing processing requirements for vehicle control
KR102476931B1 (en) MERGING LiDAR INFORMATION AND CAMERA INFORMATION
CA3136909A1 (en) Systems and methods for simultaneous localization and mapping using asynchronous multi-view cameras
CN111307170B (en) Unmanned vehicle driving planning method, device, equipment and medium
CN110789515B (en) System and method for hardware validation in a motor vehicle
US20220250656A1 (en) Systems and methods for vehicular-network-assisted federated machine learning
US20230032009A1 (en) Shared tile map with live updates
US20220244068A1 (en) Dynamic map generation with focus on construction and localization field of technology
CN113312403B (en) Map acquisition method and device, electronic equipment and storage medium
US20230230423A1 (en) Physical and virtual identity association
US11546503B1 (en) Dynamic image compression for multiple cameras of autonomous vehicles
US11790665B2 (en) Data driven dynamically reconfigured disparity map
US11644846B2 (en) System and method for real-time lane validation
CN117022319A (en) Vehicle control method, device, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant