CN117734723A - Autopilot system, autopilot vehicle and cloud device

Info

Publication number: CN117734723A
Application number: CN202211120020.3A
Authority: CN (China)
Prior art keywords: mark, target position, target, vehicle, cloud device
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 孟凡星, 佟源洋, 王庆全
Current Assignee: Beijing Sankuai Online Technology Co Ltd
Original Assignee: Beijing Sankuai Online Technology Co Ltd
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority to CN202211120020.3A
Publication of CN117734723A

Abstract

The application discloses an automatic driving system, an automatic driving vehicle and a cloud device, belonging to the technical field of artificial intelligence. The system comprises an automatic driving vehicle and a cloud device in communication connection with each other. The automatic driving vehicle is used for determining a first mark of a target position based on perception data detected in real time during driving; the automatic driving vehicle is further used for sending a mark change request to the cloud device based on a second mark of the target position stored in the automatic driving vehicle being different from the first mark. The cloud device is used for receiving the mark change request sent by the automatic driving vehicle and determining a third mark of the target position according to the position information of the target position; the cloud device is further used for sending the third mark to the automatic driving vehicle. The automatic driving vehicle is further used for receiving the third mark sent by the cloud device and changing the mark of the target position according to the third mark. The system achieves high accuracy in changing position marks.

Description

Autopilot system, autopilot vehicle and cloud device
Technical Field
The embodiments of the present application relate to the technical field of artificial intelligence, and in particular to an automatic driving system, an automatic driving vehicle and a cloud device.
Background
With the rapid development of artificial intelligence technology, automatic driving technology, also called unmanned driving, is receiving increasingly widespread attention. A map is stored in the automatic driving device; the map includes a mark for each position, and the marks record auxiliary information about the positions the automatic driving device will pass through, which needs to be considered in decision making and planning, thereby improving the traffic capacity of the automatic driving device.
Disclosure of Invention
The embodiments of the present application provide an automatic driving system, an automatic driving vehicle and a cloud device, which can be used to solve the problems in the related art. The technical scheme is as follows:
In a first aspect, an embodiment of the present application provides an automatic driving system, where the automatic driving system includes an automatic driving vehicle and a cloud device, and the automatic driving vehicle and the cloud device are in communication connection;
the automatic driving vehicle is used for determining a first mark of a target position based on perception data detected in real time during driving, the first mark being used for indicating the presence of an object at the target position at the current moment;
the automatic driving vehicle is further used for sending a mark change request to the cloud device based on a second mark of the target position stored in the automatic driving vehicle being different from the first mark, the mark change request including at least position information of the target position, and the second mark being used for indicating the presence of an object at the target position at a historical moment;
the cloud device is used for receiving the mark change request sent by the automatic driving vehicle and determining a third mark of the target position according to the position information of the target position, the third mark being used for indicating the presence of an object at the target position at the current moment;
the cloud device is further used for sending the third mark to the automatic driving vehicle;
the automatic driving vehicle is further used for receiving the third mark sent by the cloud device and changing the mark of the target position according to the third mark.
In one possible implementation, the automatic driving vehicle is used for changing the mark of the target position from the second mark to the third mark based on the third mark being different from the second mark.
In one possible implementation manner, the automatic driving vehicle is used for determining, based on the perception data detected in real time during driving, that no object exists at the target position, and using a target mark as the first mark of the target position;
or determining a first object located at the target position based on the perception data detected in real time during driving, determining the matching degree between the first object and each candidate mark, and using the candidate mark whose matching degree meets a first matching requirement as the first mark of the target position.
In one possible implementation manner, the automatic driving vehicle is further used for controlling the automatic driving vehicle to pass through the target position based on the traffic type corresponding to the changed mark of the target position being passable;
or, based on the traffic type corresponding to the changed mark of the target position being non-passable, adjusting the driving route of the automatic driving vehicle to obtain an adjusted driving route and controlling the automatic driving vehicle to drive according to the adjusted driving route, where the adjusted driving route does not pass through the target position.
In a possible implementation manner, the automatic driving vehicle is further used for determining an information transmission distance according to the position information of the target position; determining a first road section according to the position information of the target position and the information transmission distance, where the first road section ends at the target position; and sending a first change instruction to a target vehicle located on the first road section, where the first change instruction includes the position information of the target position and the third mark and is used for instructing the target vehicle to change the mark of the target position to the third mark.
In a possible implementation manner, the cloud device is configured to obtain an image of the target position according to the position information of the target position, and identify the image of the target position to obtain a third mark of the target position;
or, according to the position information of the target position, acquiring traffic information of the target position; and determining a third mark of the target position according to the traffic information of the target position.
In one possible implementation, the traffic information of the target position includes the takeover condition of a reference vehicle at the target position, the reference vehicle being a vehicle other than the automatic driving vehicle;
the cloud device is used for acquiring the running state of the reference vehicle after being taken over, based on the takeover condition of the reference vehicle at the target position being taken over; acquiring an image of the target position based on the running state of the reference vehicle after being taken over being detour; and identifying the image of the target position to obtain the third mark of the target position.
In a possible implementation manner, the cloud device is used for identifying the image of the target position, determining that no object exists at the target position, and using a reference mark as the third mark of the target position;
or, identifying the image of the target position to obtain a second object located at the target position, determining the matching degree between the second object and each candidate mark, and using the candidate mark whose matching degree meets a second matching requirement as the third mark of the target position.
In one possible implementation manner, the cloud device is further used for using the reference mark as the third mark of the target position based on the takeover condition of the reference vehicle at the target position being not taken over, or based on the takeover condition of the reference vehicle at the target position being taken over and the running state of the reference vehicle after being taken over being straight ahead.
In one possible implementation, the cloud device is further used for determining a candidate vehicle whose driving route includes the target position and which has not yet passed through the target position; and sending a second change instruction to the candidate vehicle, where the second change instruction includes the position information of the target position and the third mark and is used for instructing the candidate vehicle to change the mark of the target position to the third mark.
In one possible implementation, the mark change request further includes the first mark, and the mark change request is used for indicating that the mark of the target position should be changed to the first mark;
the cloud device is further used for sending a first mark change result to the automatic driving vehicle based on the third mark being the same as the first mark, the first mark change result being used for indicating that the mark of the target position is changed to the first mark;
the automatic driving vehicle is further used for receiving the first mark change result sent by the cloud device and changing the mark of the target position from the second mark to the first mark.
In one possible implementation, the cloud device is further used for sending a second mark change result to the automatic driving vehicle based on the third mark being different from the first mark, the second mark change result including the third mark and being used for indicating that the mark of the target position is changed to the third mark;
the automatic driving vehicle is further used for receiving the second mark change result sent by the cloud device and changing the mark of the target position according to the third mark.
In a second aspect, an embodiment of the present application provides an automatic driving vehicle, the automatic driving vehicle being in communication connection with a cloud device and including a transceiver and a processor;
the processor is used for determining a first mark of a target position based on perception data detected in real time during driving, the first mark being used for indicating the presence of an object at the target position at the current moment;
the transceiver is used for sending a mark change request to the cloud device based on a second mark of the target position stored in the automatic driving vehicle being different from the first mark, where the mark change request includes at least position information of the target position and the second mark is used for indicating the presence of an object at the target position at a historical moment;
the transceiver is further used for receiving a third mark sent by the cloud device, where the third mark is a mark determined by the cloud device according to the position information of the target position and is used for indicating the presence of an object at the target position at the current moment;
the processor is further used for changing the mark of the target position according to the third mark.
In a third aspect, an embodiment of the present application provides a cloud device, where the cloud device is in communication connection with an automatic driving vehicle and includes a transceiver and a processor;
the transceiver is used for receiving a mark change request sent by the automatic driving vehicle, where the mark change request includes at least position information of a target position;
the processor is used for determining a third mark of the target position according to the position information of the target position, the third mark being used for indicating the presence of an object at the target position at the current moment;
the transceiver is further used for sending the third mark to the automatic driving vehicle, where the third mark is used by the automatic driving vehicle to change the mark of the target position.
The technical scheme provided by the embodiment of the application at least brings the following beneficial effects:
According to the automatic driving system provided by the embodiments of the present application, the mark of the target position is changed while the automatic driving vehicle is driving, so that the change of the mark is timely; moreover, through the interaction between the automatic driving vehicle and the cloud device, the cloud device determines the third mark of the target position, so that the determined third mark is more accurate, which further improves the accuracy of changing position marks and the driving safety of the automatic driving vehicle.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an automatic driving system provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an automatic driving vehicle provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a cloud device provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of an automatic driving system provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a computer device provided in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an autopilot system according to an embodiment of the present application, as shown in fig. 1, the system includes: an autonomous vehicle 101 and a cloud device 102. The autonomous vehicle 101 is communicatively connected to the cloud device 102 via a wired network or a wireless network.
The autonomous vehicle 101 is configured to determine a first mark of the target position based on perception data detected in real time during driving, the first mark being used to indicate the presence of an object at the target position at the current moment.
The perception data includes point cloud data and images. The target position is a position that the autonomous vehicle 101 has not yet passed while traveling along its driving route. This embodiment does not limit the process by which the autonomous vehicle 101 detects the perception data of the target position in real time. Illustratively, the perception data is point cloud data: the point cloud data of the target position is acquired through a 3D (Three-Dimensional) scanning device, the 3D scanning device is in communication connection with the autonomous vehicle 101 through a wired network or a wireless network, and the 3D scanning device sends the point cloud data of the target position to the autonomous vehicle 101, so that the autonomous vehicle 101 obtains the point cloud data of the target position. Optionally, the 3D scanning device may be a laser radar (Lidar), a stereo camera, a time-of-flight camera, or another device, which is not limited in this embodiment of the present application.
The process of acquiring the point cloud data of the target position by the 3D scanning device comprises the following steps: the 3D scanning device measures feature information of a plurality of feature points of the target position in an automated manner, and then outputs point cloud data of the target position based on the feature information of the plurality of feature points. Optionally, the characteristic information includes at least one of coordinates, color information (Red-Green-Blue, R-G-B), and reflection Intensity information (Intensity). Of course, the feature information may also include information about each direction angle, which is not limited in the embodiment of the present application.
The coordinates may be three-dimensional coordinates, two-dimensional coordinates, or coordinates of other dimensions, which are not limited in the embodiment of the present application. Taking coordinates as three-dimensional coordinates as an example, the coordinates include coordinates in a first direction, coordinates in a second direction, and coordinates in a third direction, and each of the direction angle information includes first direction angle information, second direction angle information, and third direction angle information. The first direction, the second direction and the third direction are three different directions. Illustratively, the first direction is the X-direction, the second direction is the Y-direction, and the third direction is the Z-direction.
Taking 3D scanning equipment as a laser radar as an example, the laser radar is installed on the automatic driving vehicle 101, and the laser radar is measurement equipment integrating laser scanning and positioning and attitude determination systems. The lidar system includes a laser and a receiver. The laser is capable of generating a plurality of laser pulses, and the laser emits the generated plurality of laser pulses onto a target location. Diffuse reflection occurs after the target position receives the laser pulse, and the receiver receives the diffusely reflected laser light. The laser records a first time for emitting laser pulses to the target position, the receiver records a second time for receiving diffuse reflection laser, and the laser radar determines the propagation time of the laser pulses emitted to the target position according to the first time and the second time; and then, based on the speed of light and the propagation time, determining the characteristic information of the characteristic point of the laser pulse emitted to the target position at the target position. Because the laser emits a plurality of laser pulses to the target position at one time, the characteristic information of a plurality of characteristic points is obtained, and then the point cloud data of the target position is obtained according to the characteristic information of the plurality of characteristic points.
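To make the above concrete, the following minimal Python sketch shows how pulse timings and direction angles could be turned into feature points of a point cloud; the spherical-coordinate geometry, names and values are illustrative assumptions of this sketch, not the patent's implementation.

    # A minimal sketch of how a lidar-style 3D scanner could turn pulse timings
    # into point cloud data. The first time (emission) and second time
    # (reception) give the propagation time; with the speed of light this
    # yields the range, which direction angles convert into X/Y/Z coordinates.
    from dataclasses import dataclass
    from math import cos, sin

    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    @dataclass
    class FeaturePoint:
        x: float
        y: float
        z: float
        intensity: float  # reflection intensity information

    def feature_point(t_emit: float, t_receive: float,
                      azimuth: float, elevation: float,
                      intensity: float) -> FeaturePoint:
        """Derive one feature point from pulse timings and direction angles."""
        propagation_time = t_receive - t_emit             # round trip
        distance = SPEED_OF_LIGHT * propagation_time / 2  # one-way range
        x = distance * cos(elevation) * cos(azimuth)
        y = distance * cos(elevation) * sin(azimuth)
        z = distance * sin(elevation)
        return FeaturePoint(x, y, z, intensity)

    # One burst yields many pulses, hence many feature points: the point cloud.
    point_cloud = [feature_point(0.0, 6.7e-7, 0.01 * i, 0.0, 0.8)
                   for i in range(10)]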
Optionally, the perception data is an image. The autonomous vehicle 101 is provided with an image capturing device, which captures images of the positions to be passed in real time while the autonomous vehicle 101 is running. When the image capturing device captures an image of the target position, the autonomous vehicle 101 thereby acquires an image of the target position. The image capturing device may be any type of image capturing device, which is not limited in this embodiment of the present application.
In one possible implementation, the process of determining the first mark of the target position by the autonomous vehicle 101 from the perception data of the target position includes: the autonomous vehicle 101 determines, according to the perception data of the target position, that no object exists at the target position, and uses the target mark as the first mark of the target position, where the target mark is used to indicate that no object exists at the target position; or determines a first object located at the target position according to the perception data of the target position, determines the matching degree between the first object and each candidate mark, and uses the candidate mark whose matching degree meets the first matching requirement as the first mark of the target position. The candidate mark whose matching degree meets the first matching requirement is the candidate mark with the highest matching degree.
Optionally, determining the degree of matching of the first object and each candidate marker comprises: and determining a first feature vector corresponding to the first object, and determining a mark feature vector corresponding to each candidate mark respectively, wherein the vector dimensions of the first feature vector and the mark feature vector are the same. And determining the matching degree of the first object and each candidate mark according to the first feature vector corresponding to the first object and the mark feature vector corresponding to each candidate mark. In one possible implementation manner, according to a first feature vector corresponding to the first object and a mark feature vector corresponding to each candidate mark, determining a distance from the first object to each candidate mark, and taking the distance from the first object to each candidate mark as a matching degree between the first object and each candidate mark. The distance between the first object and each candidate mark may be a cosine distance between the first object and each candidate mark, or may be an euclidean distance between the first object and each candidate mark, which is not limited in the embodiment of the present application.
Optionally, the first feature vector corresponding to the first object is (X1, X2, …, Xn) and the mark feature vector corresponding to any candidate mark is (Y1, Y2, …, Yn). The distance L from the first object to that candidate mark is determined according to the following formula (1):

L = sqrt((X1 − Y1)^2 + (X2 − Y2)^2 + … + (Xn − Yn)^2)   (1)
Illustratively, the candidate marks include candidate mark 1, candidate mark 2 and candidate mark 3, and the first object has the highest matching degree with candidate mark 1; therefore, the first mark of the target position is determined to be candidate mark 1.
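As a concrete illustration of the matching step, the following minimal Python sketch picks the candidate mark whose feature vector is closest to the object's feature vector under the Euclidean distance of formula (1); the candidate marks and feature vectors are hypothetical placeholders.

    # A minimal sketch of the mark-matching step, assuming Euclidean distance
    # as in formula (1). The smallest distance is the highest matching degree.
    from math import sqrt

    def euclidean_distance(x: list[float], y: list[float]) -> float:
        """Formula (1): L = sqrt(sum_i (X_i - Y_i)^2)."""
        return sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

    def first_mark(object_vector: list[float],
                   candidate_marks: dict[str, list[float]]) -> str:
        """Pick the candidate mark whose feature vector is closest to the
        object's feature vector, i.e. the highest matching degree."""
        return min(candidate_marks,
                   key=lambda mark: euclidean_distance(object_vector,
                                                       candidate_marks[mark]))

    # Hypothetical example: three candidate marks with 3-dimensional vectors.
    candidates = {
        "candidate mark 1": [0.9, 0.1, 0.0],
        "candidate mark 2": [0.0, 1.0, 0.2],
        "candidate mark 3": [0.3, 0.3, 0.9],
    }
    print(first_mark([0.8, 0.2, 0.1], candidates))  # -> "candidate mark 1"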
The autonomous vehicle 101 is further configured to send a mark change request to the cloud device 102 based on the second mark of the target position stored in the autonomous vehicle 101 being different from the first mark.
In one possible implementation, a map is stored in the autonomous vehicle 101; the map includes a second mark for each position, and the second mark indicates the presence of an object at the position at a historical moment, the historical moment being earlier than the current moment. The second mark of the target position is therefore stored in the autonomous vehicle 101. After the first mark of the target position is determined in the above process, the autonomous vehicle 101 generates a mark change request based on the second mark of the target position stored in the autonomous vehicle 101 being different from the first mark, the mark change request including at least the position information of the target position. It then sends the mark change request to the cloud device 102, so that the cloud device 102 determines a third mark of the target position, the third mark indicating the presence of an object at the target position at the current moment.
Alternatively, if the second mark of the target position stored in the autonomous vehicle 101 is the same as the first mark, the presence of the object at the target position has not changed, that is, no mark change request needs to be generated.
Illustratively, the first mark of the target position is 1 and the second mark of the target position is 0; since the first mark and the second mark of the target position are different, the autonomous vehicle 101 generates a mark change request and sends it to the cloud device 102.
For another example, the first mark of the target position is 1 and the second mark of the target position is also 1; since the first mark and the second mark of the target position are the same, there is no need to generate a mark change request or to perform the subsequent steps.
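The trigger logic described above can be summarized in the following minimal Python sketch; the request structure, the map representation and the mark encodings are assumptions for illustration only.

    # A minimal sketch of the vehicle-side trigger: compare the stored second
    # mark with the freshly perceived first mark and, only if they differ,
    # build a mark change request carrying the position information.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MarkChangeRequest:
        position: tuple[float, float]      # position information of the target
        first_mark: Optional[int] = None   # optionally included, see below

    def maybe_request_change(position: tuple[float, float],
                             first_mark: int,
                             stored_map: dict[tuple[float, float], int]
                             ) -> Optional[MarkChangeRequest]:
        second_mark = stored_map[position]
        if first_mark == second_mark:
            return None  # object presence unchanged; nothing to do
        return MarkChangeRequest(position=position, first_mark=first_mark)

    stored_map = {(39.98, 116.30): 0}   # second mark of the target position
    req = maybe_request_change((39.98, 116.30), 1, stored_map)
    print(req)  # MarkChangeRequest(position=(39.98, 116.3), first_mark=1)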
The cloud device 102 is configured to receive the mark change request sent by the autonomous vehicle 101 and determine the third mark of the target position according to the position information of the target position.
In one possible implementation, after receiving the mark change request sent by the autonomous vehicle 101, the cloud device 102 parses the request to obtain the position information of the target position. The cloud device 102 then obtains the traffic information of the target position according to the position information of the target position and determines the third mark of the target position according to the traffic information of the target position.
In one possible implementation, the cloud device 102 stores the correspondence between the position information of each position and the traffic information of that position. After determining the position information of the target position, the cloud device 102 obtains the traffic information of the target position according to the position information of the target position and this correspondence. The traffic information of the target position includes the takeover condition of a reference vehicle at the target position, the reference vehicle being a vehicle other than the autonomous vehicle.
In one possible implementation, determining the third mark of the target position according to the traffic information of the target position includes: acquiring the running state of the reference vehicle after being taken over, based on the takeover condition of the reference vehicle at the target position being taken over; acquiring an image of the target position based on the running state of the reference vehicle after being taken over being detour; and identifying the image of the target position to obtain the third mark of the target position.
Based on the running state of the reference vehicle after being taken over being detour, the process of acquiring the image of the target position includes: generating an image acquisition request, the image acquisition request including the position information of the target position and being used for acquiring an image of the target position; sending the image acquisition request to the cloud control center; and receiving the image of the target position returned by the cloud control center. After the autonomous vehicle 101 acquires an image of the target position, it uploads the image to the cloud control center, which stores it.
In one possible implementation, the process of identifying the image of the target position to obtain the third mark of the target position includes: identifying the image of the target position, determining that no object exists at the target position, and using the reference mark as the third mark of the target position, where the reference mark is used to indicate that no object exists at the target position; or identifying the image of the target position to obtain a second object located at the target position, determining the matching degree between the second object and each candidate mark, and using the candidate mark whose matching degree meets the second matching requirement as the third mark of the target position. The candidate mark whose matching degree meets the second matching requirement is the candidate mark with the highest matching degree.
Optionally, a deep learning model is called to identify the image of the target position, and a third mark of the target position is obtained. The deep learning model is any model, and the embodiment of the application does not limit the type of the deep learning model. Illustratively, the deep learning model is a residual network (ResNet) model.
It should be noted that, the process of determining the matching degree between the second object and each candidate mark is similar to the process of determining the matching degree between the first object and each candidate mark described above, and will not be described in detail herein.
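As one hedged illustration of the identification step, the following sketch classifies an image of the target position with a pretrained ResNet backbone from torchvision; the class list, the replaced classifier head and the class-to-mark mapping are assumptions, and a real system would use a model fine-tuned on its own labels.

    # A hedged sketch of image identification with a ResNet, per the deep
    # learning model mentioned above. The label set is hypothetical; the new
    # classifier head here is untrained and stands in for a fine-tuned one.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    CLASSES = ["no object", "construction area", "pole", "tree"]  # assumed

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    def third_mark(image_path: str) -> int:
        """Identify the image of the target position; return a mark index."""
        batch = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            logits = model(batch)
        return int(logits.argmax(dim=1))  # index into CLASSES, used as mark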
Optionally, the cloud device 102 is further configured to directly use the reference mark as the third mark of the target position, without acquiring an image of the target position, if the takeover condition of the reference vehicle at the target position is not taken over, or if the takeover condition of the reference vehicle at the target position is taken over and the running state of the reference vehicle after being taken over is straight ahead.
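Putting the branches above together, the following minimal Python sketch shows the cloud-side decision; identify_image() stands in for the identification step sketched earlier, and all names and encodings are illustrative assumptions.

    # A minimal sketch of the cloud-side decision: only when a reference
    # vehicle was taken over at the target position AND drove a detour
    # afterwards is an image fetched and identified; otherwise the reference
    # mark (no object present) is used directly, without fetching an image.
    from typing import Optional

    REFERENCE_MARK = 0  # assumed encoding: no object at the target position

    def determine_third_mark(taken_over: bool, state_after_takeover: str,
                             image_path: Optional[str]) -> int:
        if taken_over and state_after_takeover == "detour":
            # Something is likely blocking the position: inspect an image.
            return identify_image(image_path)
        # Not taken over, or taken over but driving straight ahead: treat
        # the position as clear and use the reference mark directly.
        return REFERENCE_MARK

    def identify_image(image_path: Optional[str]) -> int:
        return 1  # placeholder for the ResNet-style identification above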
In one possible implementation, the process of determining the third mark of the target position may also be as follows: after receiving the mark change request sent by the autonomous vehicle, the cloud device parses the request to obtain the position information of the target position, acquires an image of the target position according to the position information, and identifies the image of the target position to obtain the third mark. The process of acquiring the image of the target position according to the position information and the process of identifying the image to obtain the third mark are described above and are not repeated here.
In one possible implementation manner, after the cloud device 102 receives the mark change request sent by the autonomous vehicle 101, it parses the request to obtain the position information of the target position and displays that information. A person then determines the third mark of the target position manually based on the target position, the weather condition at the target position and the traffic condition at the target position, and inputs the third mark into the cloud device 102, so that the cloud device 102 obtains the third mark of the target position.
It should be noted that the computing power of the cloud device 102 is better than that of the autonomous vehicle 101; therefore, the accuracy with which the cloud device 102 identifies the image of the target position is higher than the accuracy with which the autonomous vehicle 101 does.
In one possible implementation, after determining the third mark of the target position, the cloud device 102 is further configured to determine a candidate vehicle whose driving route includes the target position and which has not yet passed through the target position; and to send a second change instruction to the candidate vehicle, where the second change instruction includes the position information of the target position and the third mark and is used to instruct the candidate vehicle to change the mark of the target position to the third mark.
Optionally, the cloud device 102 is further configured to send the third mark to the autonomous vehicle 101.
Optionally, the autonomous vehicle 101 is further configured to receive the third mark sent by the cloud device 102 and change the mark of the target position according to the third mark.
In one possible implementation, the process of changing the mark of the target position according to the third mark includes: the autonomous vehicle 101 changes the mark of the target position from the second mark to the third mark based on the third mark being different from the second mark; based on the third mark being the same as the second mark, the mark of the target position does not need to be changed. Because the cloud device 102 does not store the marks of each position, it does not know whether the mark of the target position needs to be changed; therefore, after the autonomous vehicle 101 receives the third mark sent by the cloud device 102, it needs to determine whether the third mark is the same as the second mark, and the mark of the target position needs to be changed only when the third mark is different from the second mark.
Illustratively, the second mark of the target position is 0 (0 indicates that a construction area exists at the target position). The autonomous vehicle 101 determines, according to the perception data detected in real time, that the first mark of the target position is 1 (1 indicates that no construction area exists at the target position); because the first mark and the second mark are different, the autonomous vehicle 101 sends a mark change request to the cloud device 102. The autonomous vehicle 101 receives the third mark, 1, sent by the cloud device 102, and because the third mark and the second mark are different, changes the mark of the target position from 0 to 1.
For another example, the second mark of the target position is 0 (0 indicates that the blind area at the target position is not passable). The autonomous vehicle 101 determines, according to the perception data detected in real time, that the first mark of the target position is 1 (1 indicates that the blind area at the target position is passable); because the first mark and the second mark are different, the autonomous vehicle 101 sends a mark change request to the cloud device 102. The autonomous vehicle 101 receives the third mark, 0, sent by the cloud device 102. Since the second mark is identical to the third mark, the mark of the target position does not need to be changed.
For another example, the second mark of the target position is 2 (2 indicates that no pole exists at the target position). The autonomous vehicle 101 determines, according to the perception data detected in real time, that the first mark of the target position is 3 (3 indicates that a pole exists at the target position); because the first mark and the second mark are different, the autonomous vehicle 101 sends a mark change request to the cloud device 102. The autonomous vehicle 101 receives the third mark, 4 (4 indicates that a tree exists at the target position), sent by the cloud device 102. Since the third mark of the target position is different from the second mark of the target position, the autonomous vehicle 101 changes the mark of the target position from 2 to 4.
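The examples above follow one simple rule, sketched below in Python: the map entry is rewritten only when the third mark differs from the stored second mark; the mark encodings follow the examples and are otherwise assumptions.

    # A minimal sketch of how the vehicle applies the third mark returned by
    # the cloud device: change the map entry only when the marks differ.
    def apply_third_mark(stored_map: dict[tuple[float, float], int],
                         position: tuple[float, float],
                         third_mark: int) -> bool:
        """Return True if the mark of the target position was changed."""
        second_mark = stored_map[position]
        if third_mark == second_mark:
            return False  # marks agree; no change needed
        stored_map[position] = third_mark
        return True

    stored_map = {(39.98, 116.30): 2}                 # second mark: 2 (no pole)
    apply_third_mark(stored_map, (39.98, 116.30), 4)  # third mark: 4 (tree)
    print(stored_map[(39.98, 116.30)])                # -> 4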
In one possible implementation, the mark change request further includes the first mark, and the mark change request is used to indicate that the mark of the target position should be changed to the first mark. After receiving the mark change request sent by the autonomous vehicle 101, the cloud device 102 parses the request to obtain the position information of the target position and the first mark, and determines the third mark of the target position according to the position information of the target position. The cloud device 102 is further configured to send a first mark change result to the autonomous vehicle 101 based on the third mark being the same as the first mark, the first mark change result indicating that the mark of the target position is changed to the first mark. The autonomous vehicle 101 is further configured to receive the first mark change result sent by the cloud device 102 and change the mark of the target position from the second mark to the first mark.
In one possible implementation, the cloud device 102 is further configured to send a second mark change result to the autonomous vehicle 101 based on the third mark being different from the first mark, the second mark change result including the third mark and indicating that the mark of the target position is changed to the third mark. The autonomous vehicle 101 is further configured to receive the second mark change result sent by the cloud device 102 and change the mark of the target position according to the third mark.
Optionally, the process by which the cloud device 102 sends the first mark change result or the second mark change result to the autonomous vehicle 101 includes: the cloud device 102 displays a judgment page on which a first control and a second control are displayed, where the first control is used to indicate that the mark of the target position should be changed to the first mark determined by the autonomous vehicle 101, and the second control is used to indicate that the mark of the target position should be changed to the third mark determined by the cloud device 102. In response to a person's trigger operation on the first control in the judgment page, the cloud device 102 sends the first mark change result to the autonomous vehicle 101; in response to a trigger operation on the second control in the judgment page, the cloud device 102 sends the second mark change result to the autonomous vehicle 101. The trigger operation on the first control is a click operation on the first control, or an operation that selects the first control in another manner, which is not limited in this embodiment of the present application. The trigger operation on the second control is similar to the trigger operation on the first control.
In one possible implementation, after changing the mark of the target position according to the third mark, the autonomous vehicle 101 is further configured to control the autonomous vehicle to pass through the target position based on the traffic type corresponding to the changed mark of the target position being passable; or, based on the traffic type corresponding to the changed mark of the target position being non-passable, to adjust the driving route of the autonomous vehicle to obtain an adjusted driving route and control the autonomous vehicle to drive according to the adjusted driving route, where the adjusted driving route does not pass through the target position.
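A minimal Python sketch of this post-change routing decision follows; the set of passable marks and the plan_route() placeholder are assumptions of the sketch.

    # A minimal sketch: if the changed mark's traffic type is passable, keep
    # driving through the target position; otherwise re-plan a route that
    # does not pass through it.
    PASSABLE_MARKS = {1}  # assumed: marks whose traffic type is passable

    def plan_route(waypoints: list[tuple[float, float]]
                   ) -> list[tuple[float, float]]:
        return waypoints  # placeholder for a real route planner

    def adjust_route(route: list[tuple[float, float]],
                     target: tuple[float, float],
                     changed_mark: int) -> list[tuple[float, float]]:
        if changed_mark in PASSABLE_MARKS:
            return route  # traffic type passable: pass through the target
        # Traffic type non-passable: drop the target position and re-plan
        # so the adjusted driving route does not pass through it.
        return plan_route([p for p in route if p != target])

    route = [(0.0, 0.0), (39.98, 116.30), (1.0, 1.0)]
    print(adjust_route(route, (39.98, 116.30), changed_mark=0))
    # -> [(0.0, 0.0), (1.0, 1.0)]: the adjusted route avoids the target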
In one possible implementation manner, after receiving the third mark sent by the cloud device 102, the autonomous vehicle 101 is further configured to determine an information transmission distance according to the position information of the target position; determine a first road section according to the position information of the target position and the information transmission distance, where the first road section ends at the target position; and send a first change instruction to a target vehicle located on the first road section, where the first change instruction includes the position information of the target position and the third mark and is used to instruct the target vehicle to change the mark of the target position to the third mark.
The process of determining the information transmission distance according to the position information of the target position includes: determining the target road where the target position is located according to the position information of the target position; determining the limiting speed of the target road; determining the reaction distance of a vehicle traveling on the target road according to the limiting speed of the target road; determining the braking distance of a vehicle traveling on the target road according to the limiting speed of the target road and the road surface condition of the target road; and determining the information transmission distance according to the reaction distance and the braking distance.
Optionally, to improve the safety factor, a limiting speed is set for each road; the limiting speed may be a speed threshold that the maximum speed of a vehicle traveling on the road must not exceed. The correspondence between the road identifier of each road and its limiting speed is stored in the autonomous vehicle 101; after the target road where the target position is located is determined, the limiting speed of the target road is determined according to the road identifier of the target road and this correspondence.
In one possible implementation, the reaction distance S1 of a vehicle traveling on the target road is determined according to the following formula (2), based on the limiting speed of the target road:

S1 = V * T   (2)

In the above formula (2), V is the limiting speed of the target road and T is the reaction time.
In one possible implementation, the reaction time is a time set by the user based on experience, such as a reaction time of 1.5 seconds. Of course, the reaction time may be longer or shorter, which is not limited in the embodiments herein.
When calculating the reaction distance, the limiting speed and the reaction time need to be converted into consistent units; for example, if the limiting speed is in kilometers per hour, the reaction time is expressed in hours, and if the limiting speed is in meters per second, the reaction time is expressed in seconds.
For example, taking the limiting speed of the target road as 80 kilometers per hour and the reaction time as 1.5 seconds, the reaction time is converted into a unit consistent with the limiting speed: 1.5 seconds is about 0.0004 hours, and the reaction distance is determined according to the above formula (2): S1 = V × T = 80 × 0.0004 = 0.032 kilometers.
In one possible implementation, the determining the braking distance of the vehicle traveling on the target road according to the limiting speed of the target road and the road surface condition of the target road includes: determining the friction coefficient of the target road according to the road surface condition of the target road; and determining the braking distance of the vehicle running on the target road according to the limiting speed of the target road and the friction coefficient of the target road. The autonomous vehicle 101 stores a correspondence between road surface conditions and friction coefficient ranges corresponding to the road surface conditions, and determines a friction coefficient corresponding to the target road based on the road surface conditions of the target road and the correspondence between the road surface conditions and the friction coefficient ranges corresponding to the road surface conditions after determining the road surface conditions of the target road.
In one possible implementation, the braking distance S2 of a vehicle traveling on the target road is determined according to the following formula (3), based on the limiting speed of the target road and the friction coefficient of the target road:

S2 = V^2 / (2 * g * μ)   (3)

In the above formula (3), V is the limiting speed of the target road, g is the gravitational acceleration, and μ is the friction coefficient of the target road. The gravitational acceleration is 9.8 meters per square second, but may be other values, which is not limited in this embodiment.
Before calculating the braking distance, the unit of the gravitational acceleration and the unit of the limiting speed need to be made consistent; for example, if the gravitational acceleration is in meters per square second, the limiting speed is expressed in meters per second, and if the gravitational acceleration is in kilometers per square hour, the limiting speed is expressed in kilometers per hour.
For example, the limiting speed of the target road is 80 kilometers per hour, the friction coefficient of the target road is 0.35, and the gravitational acceleration is 9.8 meters per square second; converting the gravitational acceleration into a unit consistent with the limiting speed, 9.8 meters per square second equals 127008 kilometers per square hour. The braking distance of a vehicle traveling on the target road is then determined according to formula (3): S2 = 80^2 / (2 × 127008 × 0.35) ≈ 0.072 kilometers.
Optionally, the process of determining the information transmission distance according to the reaction distance and the braking distance includes: and taking the sum of the reaction distance and the braking distance as the information transmission distance. Or determining a first weight parameter corresponding to the reaction distance, a second weight parameter corresponding to the braking distance, determining a first value according to the reaction distance and the first weight parameter, determining a second value according to the braking distance and the second weight parameter, and taking the sum of the first value and the second value as the information transmission distance. The first weight parameter and the second weight parameter are set based on experience, or are adjusted according to the implementation environment, which is not limited in the embodiment of the present application.
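The whole computation can be summarized in the following minimal Python sketch, which combines formula (2) and formula (3) into the information transmission distance with optional weights; the lookup values and weights are illustrative assumptions.

    # A minimal sketch of the information transmission distance from the
    # reaction distance (formula (2)) and braking distance (formula (3)),
    # optionally weighted. Units: speeds in km/h, distances in km.
    G_KMH2 = 127008.0  # 9.8 m/s^2 expressed in km per square hour

    def reaction_distance(limit_speed_kmh: float,
                          reaction_time_s: float) -> float:
        return limit_speed_kmh * (reaction_time_s / 3600.0)    # formula (2)

    def braking_distance(limit_speed_kmh: float, friction: float) -> float:
        return limit_speed_kmh ** 2 / (2 * G_KMH2 * friction)  # formula (3)

    def transmission_distance(limit_speed_kmh: float, reaction_time_s: float,
                              friction: float,
                              w1: float = 1.0, w2: float = 1.0) -> float:
        s1 = reaction_distance(limit_speed_kmh, reaction_time_s)
        s2 = braking_distance(limit_speed_kmh, friction)
        return w1 * s1 + w2 * s2  # with w1 = w2 = 1 this is the plain sum

    # Worked example from the text: 80 km/h, 1.5 s reaction, friction 0.35.
    print(round(transmission_distance(80, 1.5, 0.35), 3))
    # ~0.105 km, matching the worked examples above (0.032 + 0.072 km,
    # up to rounding)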
The embodiment of the present application provides an automatic driving system that changes the mark of the target position while the autonomous vehicle 101 is driving, so that the change of the mark is timely; moreover, through the interaction between the autonomous vehicle 101 and the cloud device 102, the cloud device 102 determines the third mark of the target position, so that the determined third mark is more accurate, which further improves the accuracy of changing position marks and the driving safety of the autonomous vehicle.
Fig. 2 is a schematic diagram of an autonomous vehicle provided in an embodiment of the present application. The autonomous vehicle is in communication connection with a cloud device and includes a transceiver 201 and a processor 202. Wherein:
the processor 202 is configured to determine a first mark of a target position based on perception data detected in real time during driving, the first mark being used to indicate the presence of an object at the target position at the current moment;
the transceiver 201 is configured to send a mark change request to the cloud device based on a second mark of the target position stored in the autonomous vehicle being different from the first mark, where the mark change request includes at least the position information of the target position and the second mark is used to indicate the presence of an object at the target position at a historical moment;
the transceiver 201 is further configured to receive a third mark sent by the cloud device, where the third mark is a mark determined by the cloud device according to the position information of the target position and is used to indicate the presence of an object at the target position at the current moment;
the processor 202 is further configured to change the mark of the target position according to the third mark.
Fig. 3 is a schematic diagram of a cloud device provided in an embodiment of the present application, where the cloud device is in communication connection with an autonomous vehicle, and the cloud device includes a transceiver 301 and a processor 302. Wherein,
the transceiver 301 is configured to receive a mark change request sent by the autonomous vehicle, where the mark change request includes at least the position information of a target position;
the processor 302 is configured to determine a third mark of the target position according to the position information of the target position, the third mark being used to indicate the presence of an object at the target position at the current moment;
the transceiver 301 is further configured to send the third mark to the autonomous vehicle, where the third mark is used by the autonomous vehicle to change the mark of the target position.
It should be understood that, in the above-provided device, when implementing the functions thereof, only the division of the above-mentioned functional modules is illustrated, and in practical application, the above-mentioned functional allocation may be implemented by different functional modules, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above.
Fig. 4 is a schematic diagram of an autopilot system provided in an embodiment of the present application, where the system includes a cloud device and an autopilot vehicle, where the cloud device includes an event collection platform, a region adaptation server, and a measure issuing platform.
Optionally, when the autonomous vehicle detects that the first mark of the target position and the second mark of the target position are different, it sends a mark change request to the event collection platform. After receiving the mark change request, the event collection platform forwards it to the region adaptation server, which determines the third mark of the target position. After the region adaptation server determines the third mark, it sends the third mark to the measure issuing platform. The measure issuing platform sends the third mark to the autonomous vehicle, and the autonomous vehicle changes the mark of the target position according to the third mark.
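For illustration, the following minimal Python sketch mirrors the message flow of Fig. 4; all class and method names are stand-ins assumed for this sketch.

    # A minimal sketch of the Fig. 4 pipeline: the mark change request goes to
    # the event collection platform, is forwarded to the region adaptation
    # server (which determines the third mark), and the measure issuing
    # platform delivers the third mark back to the vehicle.
    class RegionAdaptationServer:
        def determine_third_mark(self, request: dict) -> int:
            return 1  # placeholder for the cloud-side determination logic

    class EventCollectionPlatform:
        def __init__(self, region_server: RegionAdaptationServer):
            self.region_server = region_server

        def receive(self, request: dict) -> int:
            # Forward the mark change request to the region adaptation server.
            return self.region_server.determine_third_mark(request)

    class Vehicle:
        def change_mark(self, third_mark: int) -> None:
            print(f"mark of the target position changed to {third_mark}")

    class MeasureIssuingPlatform:
        @staticmethod
        def issue(vehicle: Vehicle, third_mark: int) -> None:
            vehicle.change_mark(third_mark)  # vehicle applies the third mark

    server = RegionAdaptationServer()
    third = EventCollectionPlatform(server).receive({"position": (39.98, 116.30)})
    MeasureIssuingPlatform.issue(Vehicle(), third)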
Fig. 5 shows a block diagram of a computer device according to an exemplary embodiment of the present application. The computer device may be an autonomous vehicle or a cloud device.
In general, the computer device 500 includes: a processor 501 and a memory 502.
Processor 501 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 501 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, the non-transitory computer readable storage medium in memory 502 is configured to store at least one instruction for execution by processor 501 to implement what is performed by the autonomous vehicle shown in fig. 2 of the present application and/or to implement what is performed by the cloud device shown in fig. 3 of the present application.
In some embodiments, the computer device 500 may further optionally include: a peripheral interface 503 and at least one peripheral. The processor 501, memory 502, and peripheral interface 503 may be connected by buses or signal lines. The individual peripheral devices may be connected to the peripheral device interface 503 by buses, signal lines or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 504, a display 505, a camera assembly 506, audio circuitry 507, a positioning assembly 508, and a power supply 509.
Peripheral interface 503 may be used to connect at least one Input/Output (I/O) related peripheral to processor 501 and memory 502. In some embodiments, processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 501, memory 502, and peripheral interface 503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 504 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 504 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 504 may communicate with other computer devices via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 504 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The display 505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 505 is a touch display, the display 505 also has the ability to collect touch signals at or above the surface of the display 505. The touch signal may be input as a control signal to the processor 501 for processing. At this time, the display 505 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 505 may be one, disposed on the front panel of the computer device 500; in other embodiments, the display 505 may be at least two, respectively disposed on different surfaces of the computer device 500 or in a folded design; in other embodiments, the display 505 may be a flexible display disposed on a curved surface or a folded surface of the computer device 500. Even more, the display 505 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 505 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 506 is used to capture images or video. Optionally, the camera assembly 506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the computer device 500 and the rear camera is disposed on the back of the computer device 500. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting or other fused shooting functions. In some embodiments, the camera assembly 506 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 507 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input them to the processor 501 for processing, or to the radio frequency circuit 504 for voice communication. For stereo acquisition or noise reduction purposes, there may be a plurality of microphones, disposed at different locations of the computer device 500. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 507 may also include a headphone jack.
The positioning component 508 is used to locate the current geographic location of the computer device 500 to enable navigation or LBS (Location Based Service). The positioning component 508 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 509 is used to power the various components in the computer device 500. The power supply 509 may use alternating current, direct current, disposable batteries, or rechargeable batteries. When the power supply 509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, while a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, the computer device 500 further includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: an acceleration sensor 511, a gyro sensor 512, a pressure sensor 513, a fingerprint sensor 514, an optical sensor 515, and a proximity sensor 516.
The acceleration sensor 511 can detect the magnitudes of acceleration on the three coordinate axes of a coordinate system established with respect to the computer device 500. For example, the acceleration sensor 511 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 501 may control the display 505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 511. The acceleration sensor 511 may also be used to acquire motion data for games or for the user.
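As a purely illustrative aid to the orientation logic described above (forming no part of the claimed subject matter), the following minimal Python sketch chooses a landscape or portrait UI from the gravity components reported by an acceleration sensor; all names and values are hypothetical:

    def choose_orientation(gx: float, gy: float) -> str:
        # Gravity components (m/s^2) on the device's x axis (short edge)
        # and y axis (long edge). When gravity pulls mostly along the long
        # edge the device is upright (portrait); otherwise it lies on its
        # side (landscape).
        return "portrait" if abs(gy) >= abs(gx) else "landscape"

    print(choose_orientation(gx=0.4, gy=9.7))   # held upright -> portrait
    print(choose_orientation(gx=9.6, gy=0.8))   # on its side  -> landscape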
The gyroscope sensor 512 may detect the body orientation and rotation angle of the computer device 500, and may cooperate with the acceleration sensor 511 to collect the user's 3D motions on the computer device 500. Based on the data collected by the gyroscope sensor 512, the processor 501 may implement the following functions: motion sensing (for example, changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 513 may be disposed on a side frame of the computer device 500 and/or on a lower layer of the display 505. When the pressure sensor 513 is disposed on the side frame of the computer device 500, it can detect the user's grip signal on the computer device 500, and the processor 501 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed on the lower layer of the display 505, the processor 501 controls the operability controls on the UI according to the user's pressure operation on the display 505. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 514 is used to collect the user's fingerprint, and the processor 501 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the user's identity according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 514 may be disposed on the front, back, or side of the computer device 500. When a physical button or a vendor logo is provided on the computer device 500, the fingerprint sensor 514 may be integrated with the physical button or the vendor logo.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the display 505 based on the ambient light intensity collected by the optical sensor 515. Specifically, when the ambient light intensity is high, the display brightness of the display 505 is turned up; when the ambient light intensity is low, the display brightness of the display 505 is turned down. In another embodiment, the processor 501 may also dynamically adjust the shooting parameters of the camera assembly 506 based on the ambient light intensity collected by the optical sensor 515.
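For illustration only, a minimal Python sketch of the brightness adjustment just described, assuming a simple linear mapping with hypothetical constants:

    def display_brightness(ambient_lux: float) -> float:
        # Map ambient light intensity (lux) to a brightness level in
        # [0.1, 1.0]: the brighter the surroundings, the brighter the screen.
        min_level, max_level, full_lux = 0.1, 1.0, 1000.0
        clamped = min(ambient_lux, full_lux)
        return min_level + (max_level - min_level) * clamped / full_lux

    print(display_brightness(50.0))     # dim room        -> low brightness
    print(display_brightness(1200.0))   # direct sunlight -> full brightness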
The proximity sensor 516, also referred to as a distance sensor, is typically disposed on the front panel of the computer device 500. The proximity sensor 516 is used to collect the distance between the user and the front of the computer device 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front of the computer device 500 gradually decreases, the processor 501 controls the display 505 to switch from the bright-screen state to the off-screen state; when the proximity sensor 516 detects that the distance between the user and the front of the computer device 500 gradually increases, the processor 501 controls the display 505 to switch from the off-screen state to the bright-screen state.
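The screen-state switching can be sketched as follows (illustrative Python only; the 5 cm threshold is a hypothetical value, and real drivers typically add hysteresis and debouncing):

    class ProximityScreenController:
        def __init__(self, threshold_cm: float = 5.0):
            self.threshold_cm = threshold_cm
            self.screen_on = True

        def on_distance(self, distance_cm: float) -> bool:
            # User approaching the front panel: turn the screen off.
            if self.screen_on and distance_cm < self.threshold_cm:
                self.screen_on = False
            # User moving away again: turn the screen back on.
            elif not self.screen_on and distance_cm >= self.threshold_cm:
                self.screen_on = True
            return self.screen_on

    controller = ProximityScreenController()
    for distance in (20.0, 3.0, 12.0):
        print(distance, controller.on_distance(distance))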
Those skilled in the art will appreciate that the structure shown in Fig. 5 does not constitute a limitation of the computer device 500, which may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
Fig. 6 is a schematic structural diagram of a computer device provided in an embodiment of the present application. The computer device may be an autopilot device or a cloud device. The computer device 600 may vary considerably depending on configuration or performance, and may include one or more processors (Central Processing Units, CPU) 601 and one or more memories 602, where at least one program code is stored in the one or more memories 602 and is loaded and executed by the one or more processors 601 to implement the operations performed by the autonomous vehicle shown in Fig. 2 and/or the operations performed by the cloud device shown in Fig. 3. Of course, the computer device 600 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for implementing the functions of the device, which are not described here.
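To make concrete how the vehicle-side and cloud-side program code divide the work described in this application, the following Python sketch mimics the mark-change exchange (all class, method, and mark names are hypothetical illustrations, not the claimed implementation):

    from dataclasses import dataclass

    @dataclass
    class MarkChangeRequest:
        position: tuple      # position information of the target position
        first_mark: str      # mark derived from real-time sensing data

    class CloudDevice:
        def determine_third_mark(self, position: tuple) -> str:
            # Hypothetical stand-in for the cloud side's own determination,
            # e.g. image recognition or a traffic-information lookup.
            return "obstacle"

        def handle(self, request: MarkChangeRequest) -> str:
            return self.determine_third_mark(request.position)

    class AutonomousVehicle:
        def __init__(self, cloud: CloudDevice, stored_marks: dict):
            self.cloud = cloud
            self.stored_marks = stored_marks   # second marks, from history

        def on_perception(self, position: tuple, first_mark: str) -> None:
            second_mark = self.stored_marks.get(position)
            if second_mark != first_mark:       # sensing disagrees with map
                request = MarkChangeRequest(position, first_mark)
                third_mark = self.cloud.handle(request)
                if third_mark != second_mark:   # cloud confirms the change
                    self.stored_marks[position] = third_mark

    cloud = CloudDevice()
    vehicle = AutonomousVehicle(cloud, stored_marks={(10.0, 20.0): "clear"})
    vehicle.on_perception((10.0, 20.0), first_mark="obstacle")
    print(vehicle.stored_marks[(10.0, 20.0)])   # -> obstacle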
It should be noted that the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, displayed data, etc.), and signals referred to in this application are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of the relevant data comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the sensing data referred to in this application are all acquired with full authorization.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
The foregoing is merely an exemplary embodiment of the present application and is not intended to limit the present application; any modification, equivalent replacement, or improvement made within the principles of the present application shall fall within the protection scope of the present application.

Claims (14)

1. An autopilot system, wherein the system comprises an autonomous vehicle and a cloud device, the autonomous vehicle being communicatively connected to the cloud device;
the autonomous vehicle is configured to determine a first mark of a target position based on sensing data detected in real time during driving, the first mark being used to indicate the presence of an object at the target position at the current time;
the autonomous vehicle is further configured to send a mark change request to the cloud device based on a second mark of the target position stored in the autonomous vehicle being different from the first mark, the mark change request including at least position information of the target position, the second mark being used to indicate the presence of an object at the target position at a historical time;
the cloud device is configured to receive the mark change request sent by the autonomous vehicle and determine a third mark of the target position according to the position information of the target position, the third mark being used to indicate the presence of an object at the target position at the current time;
the cloud device is further configured to send the third mark to the autonomous vehicle;
the autonomous vehicle is further configured to receive the third mark sent by the cloud device and change the mark of the target position according to the third mark.
2. The system of claim 1, wherein the autonomous vehicle is configured to change the mark of the target position from the second mark to the third mark based on the third mark being different from the second mark.
3. The system of claim 1 or 2, wherein the autonomous vehicle is configured to determine, based on the sensing data detected in real time during driving, that no object is present at the target position, and to use a target mark as the first mark of the target position;
or to determine, based on the sensing data detected in real time during driving, a first object located at the target position; determine the degree of matching between the first object and each candidate mark; and use the candidate mark whose degree of matching meets a first matching requirement as the first mark of the target position.
4. The system of claim 1 or 2, wherein the autonomous vehicle is further configured to control the autonomous vehicle to pass through the target position based on the traffic type corresponding to the changed mark of the target position being passable;
or, based on the traffic type corresponding to the changed mark of the target position being non-passable, to adjust the driving route of the autonomous vehicle to obtain an adjusted driving route and control the autonomous vehicle to drive according to the adjusted driving route, the adjusted driving route not passing through the target position.
5. The system of claim 1 or 2, wherein the autonomous vehicle is further configured to determine an information transmission distance based on the position information of the target position; determine a first road section according to the position information of the target position and the information transmission distance, the end position of the first road section being the target position; and send a first change instruction to a target vehicle located on the first road section, the first change instruction including the position information of the target position and the third mark, the first change instruction being used to instruct the target vehicle to change the mark of the target position to the third mark.
6. The system of claim 1 or 2, wherein the cloud device is configured to acquire an image of the target position according to the position information of the target position, and identify the image of the target position to obtain the third mark of the target position;
or to acquire traffic information of the target position according to the position information of the target position, and determine the third mark of the target position according to the traffic information of the target position.
7. The system of claim 6, wherein the traffic information of the target position includes a takeover condition of a reference vehicle at the target position, the reference vehicle being a vehicle other than the autonomous vehicle;
the cloud device is configured to acquire the driving state of the reference vehicle after being taken over, based on the takeover condition of the reference vehicle at the target position being taken over; acquire an image of the target position based on the driving state of the reference vehicle after being taken over being detour; and identify the image of the target position to obtain the third mark of the target position.
8. The system of claim 7, wherein the cloud device is configured to identify the image of the target position, determine that no object is present at the target position, and use a reference mark as the third mark of the target position;
or to identify the image of the target position to obtain a second object located at the target position; determine the degree of matching between the second object and each candidate mark; and use the candidate mark whose degree of matching meets a second matching requirement as the third mark of the target position.
9. The system of claim 8, wherein the cloud device is further configured to use the reference mark as the third mark of the target position based on the takeover condition of the reference vehicle at the target position being not taken over, or based on the takeover condition of the reference vehicle at the target position being taken over and the driving state of the reference vehicle after being taken over being straight ahead (a non-limiting sketch of this decision logic follows the claims).
10. The system of claim 1, wherein the cloud device is further configured to determine a candidate vehicle whose driving route includes the target position and which has not yet passed the target position; and send a second change instruction to the candidate vehicle, the second change instruction including the position information of the target position and the third mark, the second change instruction being used to instruct the candidate vehicle to change the mark of the target position to the third mark.
11. The system of claim 1, wherein the mark change request further includes the first mark, the mark change request being used to request that the mark of the target position be changed to the first mark;
the cloud device is further configured to send a first mark change result to the autonomous vehicle based on the third mark being the same as the first mark, the first mark change result being used to indicate that the mark of the target position is changed to the first mark;
the autonomous vehicle is further configured to receive the first mark change result sent by the cloud device and change the mark of the target position from the second mark to the first mark.
12. The system of claim 11, wherein the cloud device is further configured to send a second mark change result to the autonomous vehicle based on the third mark being different from the first mark, the second mark change result including the third mark, the second mark change result being used to indicate that the mark of the target position is changed to the third mark;
the autonomous vehicle is further configured to receive the second mark change result sent by the cloud device and change the mark of the target position according to the third mark.
13. An autonomous vehicle, wherein the autonomous vehicle is communicatively connected to a cloud device, and the autonomous vehicle comprises a transceiver and a processor;
the processor is configured to determine a first mark of a target position based on sensing data detected in real time during driving, the first mark being used to indicate the presence of an object at the target position at the current time;
the transceiver is configured to send a mark change request to the cloud device based on a second mark of the target position stored in the autonomous vehicle being different from the first mark, the mark change request including at least position information of the target position, the second mark being used to indicate the presence of an object at the target position at a historical time;
the transceiver is further configured to receive a third mark sent by the cloud device, the third mark being a mark determined by the cloud device according to the position information of the target position, the third mark being used to indicate the presence of an object at the target position at the current time;
the processor is further configured to change the mark of the target position according to the third mark.
14. A cloud device, wherein the cloud device is communicatively connected to an autonomous vehicle and comprises a transceiver and a processor;
the transceiver is configured to receive a mark change request sent by the autonomous vehicle, the mark change request including at least position information of a target position;
the processor is configured to determine a third mark of the target position according to the position information of the target position, the third mark being used to indicate the presence of an object at the target position at the current time;
the transceiver is further configured to send the third mark to the autonomous vehicle, the third mark being used by the autonomous vehicle to change the mark of the target position.
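Purely as a non-limiting illustration (forming no part of the claims), the cloud-side decision logic recited in claims 7 to 9 can be sketched in Python as follows; the function names, mark values, and state strings are hypothetical:

    from typing import Optional

    def identify_image_at_position() -> str:
        # Stand-in for acquiring an image of the target position and
        # matching a detected object against candidate marks (claim 8).
        return "construction"

    def third_mark_from_takeover(taken_over: bool,
                                 post_takeover_state: Optional[str],
                                 reference_mark: str = "passable") -> str:
        # Not taken over, or taken over but then driving straight ahead:
        # keep the reference mark (claim 9).
        if not taken_over or post_takeover_state == "straight":
            return reference_mark
        # Taken over and then detouring: identify an image of the target
        # position instead (claim 7).
        if post_takeover_state == "detour":
            return identify_image_at_position()
        return reference_mark

    print(third_mark_from_takeover(taken_over=False, post_takeover_state=None))
    print(third_mark_from_takeover(taken_over=True, post_takeover_state="detour"))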
CN202211120020.3A 2022-09-14 2022-09-14 Autopilot system, autopilot vehicle and cloud device Pending CN117734723A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211120020.3A CN117734723A (en) 2022-09-14 2022-09-14 Autopilot system, autopilot vehicle and cloud device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211120020.3A CN117734723A (en) 2022-09-14 2022-09-14 Autopilot system, autopilot vehicle and cloud device

Publications (1)

Publication Number Publication Date
CN117734723A (en) 2024-03-22

Family

ID=90259738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211120020.3A Pending CN117734723A (en) 2022-09-14 2022-09-14 Autopilot system, autopilot vehicle and cloud device

Country Status (1)

Country Link
CN (1) CN117734723A (en)

Similar Documents

Publication Publication Date Title
CN110967011B (en) Positioning method, device, equipment and storage medium
CN110979318B (en) Lane information acquisition method and device, automatic driving vehicle and storage medium
CN110095128B (en) Method, device, equipment and storage medium for acquiring missing road information
CN111400610B (en) Vehicle-mounted social method and device and computer storage medium
CN111854780B (en) Vehicle navigation method, device, vehicle, electronic equipment and storage medium
CN112802369B (en) Method and device for acquiring flight route, computer equipment and readable storage medium
CN112991439B (en) Method, device, electronic equipment and medium for positioning target object
CN112269939B (en) Automatic driving scene searching method, device, terminal, server and medium
CN111754564B (en) Video display method, device, equipment and storage medium
CN111444749B (en) Method and device for identifying road surface guide mark and storage medium
CN112734346B (en) Method, device and equipment for determining lane coverage and readable storage medium
CN112817337B (en) Method, device and equipment for acquiring path and readable storage medium
CN111717205B (en) Vehicle control method, device, electronic equipment and computer readable storage medium
CN114789734A (en) Perception information compensation method, device, vehicle, storage medium, and program
CN117734723A (en) Autopilot system, autopilot vehicle and cloud device
CN113255906A (en) Method, device, terminal and storage medium for returning obstacle 3D angle information in automatic driving
CN114623836A (en) Vehicle pose determining method and device and vehicle
CN112241662B (en) Method and device for detecting drivable area
CN115092254B (en) Method, device, equipment and storage medium for acquiring steering angle of wheel
CN115180018B (en) Method, device, equipment and storage medium for measuring steering wheel rotation angle
CN113734199B (en) Vehicle control method, device, terminal and storage medium
CN117372320A (en) Quality detection method, device and equipment for positioning map and readable storage medium
WO2024087456A1 (en) Determination of orientation information and autonomous vehicle
CN116338626A (en) Point cloud data denoising method, device, equipment and computer readable storage medium
CN116331196A (en) Automatic driving automobile data security interaction system, method, terminal and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination