CN114802311B - Global vehicle control method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114802311B
CN114802311B (application CN202210738441.6A)
Authority
CN
China
Prior art keywords
target
vehicle
data
environment data
information
Prior art date
Legal status
Active
Application number
CN202210738441.6A
Other languages
Chinese (zh)
Other versions
CN114802311A (en)
Inventor
尚进
於大维
杨小枫
Current Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Original Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guoqi Intelligent Control Beijing Technology Co Ltd
Priority to CN202210738441.6A
Publication of CN114802311A
Application granted
Publication of CN114802311B


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L67/125Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a global vehicle control method and apparatus, an electronic device, and a storage medium. First request information sent by a target vehicle is received, the first request information instructing the digital twin corresponding to the target vehicle to run a target driving application. Corresponding target environment data are acquired according to the first request information; the target environment data are uploaded by registered devices in a target area and represent those devices' perception of the environment in which they are located. A target control instruction is then generated according to the target environment data and used to control the target vehicle to run in the target area, so as to realize the automatic driving function corresponding to the target driving application. Because the target environment data represent perception information uploaded by other registered devices in the target area, the target control instruction generated from these data can accurately realize the automatic driving function corresponding to the target driving application, improving the driving safety and reliability of the target vehicle.

Description

Global vehicle control method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a global vehicle control method and apparatus, an electronic device, and a storage medium.
Background
At present, the automatic driving function of an intelligent vehicle usually judges the roads and obstacles around the vehicle based on the environment perception capability of the vehicle end alone, and controls the vehicle's driving direction and speed accordingly.
However, because the perception capability of a single vehicle is limited, the environment perception capability of the vehicle end alone cannot meet the requirements of high-level automatic driving functions, which affects the stability and reliability of the vehicle's automatic driving function.
Disclosure of Invention
The application provides a global vehicle control method and apparatus, an electronic device, and a storage medium to address the problem that the limited perception capability of a single vehicle affects the stability and reliability of the vehicle's automatic driving function.
In a first aspect, the present application provides a global vehicle control method, including:
receiving first request information sent by a target vehicle, wherein the first request information is used for indicating a digital twin corresponding to the target vehicle to run a target driving application; acquiring corresponding target environment data according to the first request information, wherein the target environment data are uploaded by a registered device in a target area and represent perception information of the registered device to the environment where the registered device is located; and generating a target control instruction according to the target environment data, wherein the target control instruction is used for controlling the target vehicle to run in the target area so as to realize an automatic driving function corresponding to the target driving application.
In a possible implementation manner, the first request information includes an application identifier representing the target driving application; acquiring corresponding target environment data according to the first request information, wherein the acquisition comprises the following steps:
acquiring positioning information of the target vehicle; determining the target area according to the positioning information of the target vehicle and the application identifier of the target driving application; and acquiring corresponding target environment data according to the target area.
In a possible implementation manner, the obtaining, according to the target area, corresponding target environment data includes: determining a corresponding target data type based on the application identification; obtaining target registration equipment corresponding to the target data type in the target area based on the target data type and the target area; and acquiring target environment data uploaded by at least one target registration device.
In a possible implementation manner, acquiring the corresponding target environment data according to the first request information includes: acquiring a preset environment data model, wherein the environment data model comprises the position information of the target vehicle and environment data uploaded by a registration device; and calling the environment data model according to the first request information to obtain target environment data corresponding to the position information of the target vehicle.
In a possible implementation manner, the target environment data includes first environment data and second environment data, where the first environment data represents road traffic flow information in the target area and the second environment data characterizes the road environment around the target vehicle; generating the target control instruction according to the target environment data includes: generating first path planning information according to the first environment data; generating second path planning information according to the second environment data on the basis of the first path planning information; and generating a target control instruction according to the second path planning information, wherein the target control instruction is used for controlling the target vehicle to automatically run along a path corresponding to the second path planning information.
In one possible implementation manner, the first environment data includes signal light data and traffic flow data uploaded by registered devices in the target area; the second environment data comprises image data and/or radar data uploaded by registered equipment within a preset range of the target vehicle.
In one possible implementation manner, the registration device includes at least one of: an intelligent automobile, roadside equipment, and a mobile terminal; the method further includes: sending first notification information to the registration device, wherein the first notification information represents the position of the target vehicle or road traffic flow information in the target area.
In a possible implementation manner, the sending of the first notification information to the registration device includes at least one of the following: sending the position of the target vehicle to the mobile terminal, sending road traffic flow information in the target area to the roadside equipment, and sending the position of the target vehicle and/or the road traffic flow information in the target area to the intelligent automobile.
In one possible implementation, the method further includes: and responding to second request information sent by an external system, and sending the target environment data to the external system through a preset communication middleware so that the external system generates a three-dimensional visualized road condition map of the target area based on the target environment data.
In a second aspect, the present application provides a global vehicle control apparatus comprising:
the system comprises a transceiving module, a control module and a control module, wherein the transceiving module is used for receiving first request information sent by a target vehicle, and the first request information is used for indicating a digital twin corresponding to the target vehicle to run a target driving application;
the acquisition module is used for acquiring corresponding target environment data according to the first request information, wherein the target environment data is uploaded by the registered equipment in the target area and represents the perception information of the registered equipment to the environment where the registered equipment is located;
and the generating module is used for generating a target control instruction according to the target environment data, wherein the target control instruction is used for controlling the target vehicle to run in the target area so as to realize an automatic driving function corresponding to the target driving application.
In a possible implementation manner, the first request information includes an application identifier representing the target driving application; the acquisition module is specifically configured to: acquiring positioning information of the target vehicle; determining the target area according to the positioning information of the target vehicle and the application identifier of the target driving application; and acquiring corresponding target environment data according to the target area.
In a possible implementation manner, when the obtaining module obtains the corresponding target environment data according to the target area, the obtaining module is specifically configured to: determining a corresponding target data type based on the application identification; obtaining target registration equipment corresponding to the target data type in the target area based on the target data type and the target area; and acquiring target environment data uploaded by at least one target registration device.
In a possible implementation manner, the obtaining module is specifically configured to: acquiring a preset environment data model, wherein the environment data model comprises the position information of the target vehicle and environment data uploaded by a registration device; and calling the environment data model according to the first request information to obtain target environment data corresponding to the position information of the target vehicle.
In a possible implementation manner, the target environment data includes first environment data and second environment data, where the first environment data represents road traffic flow information in the target area and the second environment data characterizes the road environment around the target vehicle; the generation module is specifically configured to: generate first path planning information according to the first environment data; generate second path planning information according to the second environment data on the basis of the first path planning information; and generate a target control instruction according to the second path planning information, wherein the target control instruction is used for controlling the target vehicle to automatically run along a path corresponding to the second path planning information.
In a possible implementation manner, the first environment data includes signal light data and traffic flow data uploaded by registered devices in the target area; the second environment data comprises image data and/or radar data uploaded by registered equipment within a preset range of the target vehicle.
In one possible implementation manner, the registration device includes at least one of: the system comprises an intelligent automobile, roadside equipment and a mobile terminal; the transceiver module is further configured to: and sending first notification information to the registration device, wherein the first notification information represents the position of the target vehicle, or road traffic information in the target area.
In a possible implementation manner, when sending the first notification information to the registration device, the transceiver module is specifically configured to: and sending the position of the target vehicle to the mobile terminal, sending road traffic flow information in the target area to the road side equipment, and sending the position of the target vehicle and/or the road traffic flow information in the target area to the intelligent automobile.
In a possible implementation manner, the transceiver module is further configured to: in response to second request information sent by an external system, send the target environment data to the external system through preset communication middleware, so that the external system generates a three-dimensional visualized road condition map of the target area based on the target environment data.
In a third aspect, the present application provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory to implement the global vehicle control method according to any one of the first aspect of the embodiments of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing the method for controlling a global vehicle according to any one of the first aspect of the embodiments of the present application when the computer-executable instructions are executed by a processor.
In a fifth aspect, the present application provides a computer program product including a computer program which, when executed by a processor, implements the global vehicle control method according to any one of the first aspect of the embodiments of the present application.
According to the global vehicle control method and apparatus, the electronic device, and the storage medium provided by the application, first request information sent by a target vehicle is received, the first request information instructing the digital twin corresponding to the target vehicle to run a target driving application; corresponding target environment data are acquired according to the first request information, the target environment data being uploaded by registered devices in a target area and representing those devices' perception of the environment in which they are located; and a target control instruction is generated according to the target environment data and used to control the target vehicle to run in the target area, so as to realize the automatic driving function corresponding to the target driving application. Because the target environment data representing perception information uploaded by other registered devices in the target area are obtained based on the first request information, which is equivalent to expanding the perception range of the target vehicle with the perception capabilities of those devices, the target control instruction generated from the target environment data can accurately realize the automatic driving function corresponding to the target driving application, improving the driving safety and reliability of the target vehicle.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is an application scene diagram of a global vehicle control method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a global vehicle control method provided in one embodiment of the present application;
FIG. 3 is a schematic diagram of target environment data provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a specific implementation step of step S102 in the embodiment shown in FIG. 2;
FIG. 5 is a flow chart of a global vehicle control method provided in another embodiment of the present application;
fig. 6 is a schematic diagram of target areas corresponding to different applications according to an embodiment of the present disclosure;
FIG. 7 is a flowchart illustrating a specific implementation step of step S204 in the embodiment shown in FIG. 5;
FIG. 8 is a flowchart illustrating a specific implementation step of step S206 in the embodiment shown in FIG. 5;
Fig. 9 is a schematic diagram of a process of obtaining and receiving first environment data and second environment data by an edge cloud server;
FIG. 10 is a schematic structural diagram of a global vehicle control device provided in an embodiment of the present application;
FIG. 11 is a schematic view of an electronic device provided by an embodiment of the present application;
fig. 12 is a block diagram of a terminal device according to an exemplary embodiment of the present application.
With the above figures, specific embodiments of the present application are shown and will be described in more detail below. The drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the concepts of the application to those skilled in the art with reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
In the technical solution of the present application, the collection, storage, use, processing, transmission, provision, and disclosure of users' personal information all comply with the relevant laws and regulations and do not violate public order and good customs.
The following explains an application scenario of the embodiment of the present application:
the global vehicle control method provided by the embodiment of the present application can be applied to a vehicle cloud Computing scenario, and more specifically, can be applied to a digital twin vehicle cloud Computing scenario based on Multi-access Edge Computing (MEC), for example, fig. 1 is an application scenario diagram of the global vehicle control method provided by the embodiment of the present application, as shown in fig. 1, an execution subject of the method provided by the embodiment of the present application may be an Edge cloud server, the Edge cloud server may communicate with a smart vehicle, and based on a digital technology, vehicle data (including vehicle operating state data, sensor data, and the like) of the smart vehicle is obtained in real time through a digital twin (logical single vehicle) corresponding to the vehicle in a digital network, and after Computing is performed based on the operating data, a control instruction (or control data) is sent to a corresponding smart vehicle, the intelligent automobile can run based on the control instruction (or control data), so that the corresponding automatic driving function is realized.
Currently, in vehicle cloud computing application scenarios based on digital-twin technology, the digital twin (logical single vehicle) corresponding to an intelligent automobile runs in a server (or a cluster of servers). Through the digital twin, the server obtains the running state information of the vehicle on the road, such as position and speed, as well as the perception information the vehicle collects through its sensors, such as radar data and image data. It then performs computation and vehicle-end control based on these data, so that the vehicle realizes automatic driving functions such as automatic navigation, path planning, and obstacle avoidance. However, under some complex road conditions, because the number and types of sensors configured on a single vehicle are limited, only a small amount of effective perception information can be obtained, which cannot meet the requirements of high-level automatic driving functions and even degrades the stability and reliability of the automatic driving function, affecting the driving safety of the vehicle.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of a global vehicle control method according to an embodiment of the present application, and as shown in fig. 2, the global vehicle control method according to the embodiment includes the following steps:
and step S101, receiving first request information sent by the target vehicle, wherein the first request information is used for instructing a digital twin corresponding to the target vehicle to run a target driving application.
For example, the execution subject of the method provided by this embodiment may be an edge cloud server, or a control unit in a cluster formed by a plurality of distributed edge cloud servers, and the method may be realized as a specific service. More specifically, it may be realized as a core value-added service for vehicle cloud cooperative computing that runs on a 5G MEC edge cloud operator platform, is developed based on a cloud framework, and together with the vehicle cloud computing logical-single-vehicle OS forms the digital twin.
Further, the edge cloud server is communicatively connected with the target vehicle. For example, when the target vehicle needs to execute an automatic driving function based on a user instruction or another trigger condition, it may send first request information to the edge cloud server. After receiving the first request information, the edge cloud server executes the corresponding target driving application, for example an application (program) for automatic navigation, through the digital twin corresponding to the target vehicle, so that in subsequent steps the target vehicle can realize the corresponding automatic navigation function based on control instructions sent by the edge cloud server.
The target vehicle and the edge cloud server may communicate through 5G technology to achieve a low-latency communication connection. Here, the automatic driving function broadly refers to driving-automation functions of each level, including, for example, automatic path planning, adaptive cruise, and lane keeping, which are not specifically limited.
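One possible shape of the "first request information" carried over this link is sketched below. The JSON field names (`vehicle_id`, `app_id`, `timestamp_ms`) are assumptions for illustration; the patent requires only that the request identify the target driving application.

```python
import json

def build_first_request(vehicle_id: str, app_id: str, timestamp_ms: int = 0) -> str:
    """Serialize a hypothetical first-request message asking the edge cloud
    server to run a target driving application on the vehicle's digital twin."""
    return json.dumps({
        "vehicle_id": vehicle_id,
        "app_id": app_id,          # application identifier, e.g. "auto_navigation"
        "timestamp_ms": timestamp_ms,
    })

req = build_first_request("car-001", "auto_navigation")
```

On the server side, `json.loads(req)["app_id"]` would tell the edge cloud server which target driving application the vehicle is requesting.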
And step S102, acquiring corresponding target environment data according to the first request information, wherein the target environment data is uploaded by the registered equipment in the target area and represents the perception information of the registered equipment to the environment where the registered equipment is located.
Further, the first request information received by the edge cloud server includes an application identifier indicating the target application, and based on this identifier the edge cloud server can determine the target application requested by the target vehicle. The edge cloud server then locally acquires the target environment data required to execute the target application. The target environment data are data uploaded by registered devices in the target area and represent perception information about the environment in which those devices are located. Specifically, for example, if the target vehicle is currently located in city A, the corresponding target area may be city A. Of course, the target area may also be a subordinate area of city A, such as administrative district A1 of city A, or a superordinate region of city A, for example province A2 or country A3 in which city A is located.
Further, a registered device in the target area is a device that has registered on the system platform corresponding to the edge cloud server and is authorized to communicate with it; for example, a registered device may be an intelligent automobile, roadside equipment, or a mobile terminal. As an electronic device with environment perception capability, a registered device can obtain perception information describing the characteristics of the environment at its location. For example, if the registered device is an intelligent automobile equipped with a camera and a radar, the environment data it uploads are image data and radar data. As another example, if the registered device is a mobile terminal, more specifically a smartphone, it may, similarly to an intelligent automobile, upload images captured by its camera to the edge cloud server as environment data, or upload only its current location information as environment data. The specific form of the environment data uploaded by a registered device may be set as needed and is not described further here.
Fig. 3 is a schematic diagram of target environment data provided in an embodiment of the present disclosure. As shown in Fig. 3, after the target vehicle sends the first request information, the edge cloud server obtains target environment data uploaded by several vehicles (vehicle A, vehicle B, and vehicle C in the figure) and a roadside device (roadside device D in the figure) in the target area; the target environment data include image data acquired by cameras. More specifically, each device in the target area corresponds to a sensing range: vehicle A to sensing range a, vehicle B to sensing range b, vehicle C to sensing range c, and roadside device D to sensing range d. After obtaining the target environment data uploaded by each device in the target area, the edge cloud server combines them into sensing data covering a larger range (the union of sensing ranges a, b, c, and d), which is equivalent to improving the perception capability of the target vehicle.
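The widening of the perception range in Fig. 3 is simply a set union over the sensing ranges of the registered devices. The sketch below models each sensing range as a set of road-grid cells; the grid-cell modeling is our simplification, not something the patent specifies.

```python
# Sensing ranges a, b, c, d from Fig. 3, modeled as sets of grid cells.
# Cell coordinates are arbitrary illustrative values.
ranges = {
    "vehicle_A":  {(0, 0), (0, 1)},
    "vehicle_B":  {(0, 1), (1, 1)},          # overlaps vehicle A
    "vehicle_C":  {(1, 2)},
    "roadside_D": {(2, 2), (1, 1)},          # overlaps vehicle B
}

def merged_perception(ranges: dict) -> set:
    """Union of all registered devices' sensing ranges: the combined
    perception area the edge cloud server works with."""
    merged = set()
    for cells in ranges.values():
        merged |= cells
    return merged

combined = merged_perception(ranges)
# The union covers more cells than any single device's range.
```

Any single device here covers at most 2 cells, while the union covers 5, which is the sense in which the scheme "expands the perception range" of the target vehicle.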
In one possible implementation manner, as shown in fig. 4, the specific implementation steps of step S102 include:
and step S1021, acquiring a preset environment data model, wherein the environment data model comprises the position information of the target vehicle and the environment data uploaded by the registration equipment.
Step S1022, according to the first request information, the environment data model is called to obtain target environment data corresponding to the position information of the target vehicle.
Illustratively, the environment data model is a service preset in the edge cloud server, the target vehicle uploads the position information of the target vehicle to the environment data model in real time, and meanwhile, environment data such as position information uploaded by other registered devices are stored in the environment data model. By calling an interface (API) provided by the environment data model, corresponding environment data can be obtained. Specifically, the first request information includes an application identifier corresponding to the target application, and an identifier of data required to be used is determined according to the application identifier, or the first request information directly includes an identifier of data required to be used by the target application; after obtaining the first request information, the edge cloud server obtains an identifier of data required to be used by the target application by analyzing the first request information, and then calls an API (application programming interface) of a corresponding environment data model based on the identifier of the data, so that the target environment data is obtained.
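Steps S1021 to S1022 can be sketched as follows, under stated assumptions: the class, method names, and the mapping from application identifier to data identifier are all illustrative stand-ins for the environment data model's API, which the text does not specify in detail.

```python
class EnvironmentDataModel:
    """Illustrative stand-in for the service preset in the edge cloud server."""
    def __init__(self):
        self._store = {}          # data identifier -> list of uploaded records

    def upload(self, data_id, record):
        self._store.setdefault(data_id, []).append(record)

    def query(self, data_id):     # stands in for the model's API (interface)
        return self._store.get(data_id, [])

# Assumed mapping from application identifier to required data identifier.
APP_DATA_IDS = {"#01": "image_data", "#02": "radar_data"}

def handle_first_request(model, request):
    # The first request information may carry the data identifier directly,
    # or only an application identifier from which it is determined.
    data_id = request.get("data_id") or APP_DATA_IDS[request["app_id"]]
    return model.query(data_id)

model = EnvironmentDataModel()
model.upload("image_data", {"device": "car-A", "frame": "..."})
result = handle_first_request(model, {"app_id": "#01"})
```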
And step S103, generating a target control instruction according to the target environment data, wherein the target control instruction is used for controlling the target vehicle to run in the target area so as to realize the automatic driving function corresponding to the target driving application.
For example, after obtaining the target environment data in the target area, the edge cloud server performs calculation based on the target environment data to obtain a target control instruction, such as a path planning instruction, for controlling the target vehicle to operate in the target area. After the target control instruction is sent to the target vehicle, the target vehicle has effectively obtained a server response corresponding to the first request information and can execute the automatic driving function corresponding to the first request information based on the target control instruction, thereby realizing an automatic driving function based on vehicle-cloud technology. In this process, thanks to the low-delay communication between the distributed edge cloud server and the registered devices, the edge cloud server can obtain environment data in real time. Therefore, in responding to the first request information, the server obtains the target environment data corresponding to the first request information, performs calculation based on it, and generates the target control instruction for the target vehicle, which expands the perception range of the target vehicle. Meanwhile, compared with schemes in which the target vehicle obtains environment data directly from the registered devices or from a central server, this approach has lower delay, better real-time performance, and higher stability, enabling the target vehicle to achieve a higher-level automatic driving function.
In this embodiment, first request information sent by a target vehicle is received, where the first request information is used to instruct a digital twin corresponding to the target vehicle to run a target driving application; corresponding target environment data is acquired according to the first request information, where the target environment data is uploaded by registered devices in the target area and represents the perception information of those devices about their environment; and a target control instruction is generated according to the target environment data, where the target control instruction is used to control the target vehicle to operate in the target area so as to realize the automatic driving function corresponding to the target driving application. Because the target environment data representing perception information uploaded by other registered devices in the target area is obtained based on the first request information, which is equivalent to expanding the perception range of the target vehicle by using the perception capabilities of those devices, the target control instruction generated based on the target environment data can accurately realize the automatic driving function corresponding to the target driving application, improving the driving safety and reliability of the target vehicle.
Fig. 5 is a flowchart of a global vehicle control method according to another embodiment of the present application, and as shown in fig. 5, the global vehicle control method according to this embodiment further refines steps S102 to S103 on the basis of the global vehicle control method according to the embodiment shown in fig. 2, and adds a step of outputting target environment data, so that the global vehicle control method according to this embodiment includes the following steps:
step S201 is to receive first request information sent by the target vehicle, where the first request information is used to instruct a digital twin corresponding to the target vehicle to run the target driving application.
Step S202, positioning information of the target vehicle is obtained.
And step S203, determining a target area according to the positioning information of the target vehicle and the application identifier of the target driving application.
In one possible implementation manner, the first request information sent by the target vehicle includes an application identifier representing the target driving application. After receiving the first request information, the edge cloud server determines a target area corresponding to the first request information through the positioning information of the target vehicle and the application identifier of the target driving application. Specifically, first, the positioning information of the target vehicle may be independently and synchronously stored in the edge cloud server. In a possible implementation manner, the edge cloud server synchronously obtains the real-time positioning information of the target vehicle through a built-in environment data model, and the specific implementation process is described in the previous embodiment and is not described herein again.
Then, according to the application identifier of the target driving application, a corresponding necessary sensing range is determined with the position coordinates (determined from the positioning information) of the target vehicle as the center or starting point; different driving applications (that is, different application identifiers) require different necessary sensing ranges. For example, if the application identifier of the target driving application is #01, the corresponding target area is the range N meters in front of and behind the position coordinates of the target vehicle; if the application identifier is #02, the corresponding target area is the range M meters ahead of the position coordinates of the target vehicle, where N and M are positive integers given by way of example.
Fig. 6 is a schematic diagram of the target areas corresponding to different applications according to an embodiment of the present application. As shown in fig. 6, for example, when the target vehicle performs an adaptive cruise function, the corresponding target area is the road within 100 meters ahead of the position coordinates of the target vehicle. If the target vehicle then switches automatic driving functions and starts an automatic navigation driving function, that function must consider the optimal navigation route (long-distance route planning, for example to avoid congestion) while the vehicle travels automatically (short-distance avoidance of vehicles and pedestrians); the corresponding target area is therefore the road within 10 kilometers around the position coordinates of the target vehicle.
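The mapping from application identifier to target area in the fig. 6 example can be sketched as follows. The concrete area shapes and the string identifiers are assumptions for illustration; the text only fixes the two distances (100 meters ahead for adaptive cruise, 10 kilometers around for automatic navigation).

```python
def determine_target_area(position, app_id):
    """Step S203 sketch: derive the target area from the target vehicle's
    position coordinates and the application identifier."""
    if app_id == "adaptive_cruise":
        # Road within 100 m ahead, starting from the vehicle's position.
        return {"origin": position, "shape": "ahead", "distance_m": 100}
    if app_id == "auto_navigation":
        # Roads within 10 km around, centered on the vehicle's position.
        return {"origin": position, "shape": "around", "distance_m": 10_000}
    raise ValueError(f"unknown application identifier: {app_id}")

area = determine_target_area((116.4, 39.9), "auto_navigation")
```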
In this step of the embodiment, each target driving application corresponds to a target area matched with the data required by its function, and the environment data is then acquired based on that target area. This enables accurate acquisition of environment data, avoids overloading the server by acquiring too much data, and improves the real-time performance and stability of the system response.
And step S204, acquiring corresponding target environment data according to the target area.
For example, after the target area is determined, in one possible implementation manner, the environment data uploaded by all registered devices in the target area may be used as the target environment data, so as to achieve the best sensing effect with maximum redundancy in the target area.
As shown in fig. 7, in another possible implementation manner, the specific implementation step of step S204 includes:
step S2041, determining a corresponding target data type based on the application identifier.
Step S2042, based on the target data type and the target area, target registration equipment corresponding to the target data type in the target area is obtained.
Step S2043, target environment data uploaded by at least one target registration device is obtained.
For example, different automatic driving functions may correspond not only to different necessary sensing ranges (different target areas) but also to different sensor data types, that is, target data types. For example, the lane keeping function of automatic driving requires the corresponding target driving application to use image data, while the automatic obstacle avoidance function requires it to use radar data. Therefore, when acquiring the environment data in the target area, the corresponding target data type is first determined based on the application identifier. For example, if only image data is needed, a registered device in the target area that can upload image data, such as a smart car equipped with a camera or a roadside device, is determined as a target registration device, whereas a registered device in the target area that does not upload image data, such as a mobile terminal that uploads only position information or a smart vehicle without a high-definition camera that uploads only radar data, is not determined as a target registration device. Then, the target environment data uploaded by the target registration devices is obtained. This realizes accurate data acquisition based on data type, reduces the data processing amount, improves data processing efficiency, reduces system load, and improves the real-time performance and stability of the system response.
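The filtering in steps S2041 to S2043 can be sketched as follows. The device records and the lookup table from application identifier to data type are illustrative assumptions; only the two example pairings (lane keeping to image data, obstacle avoidance to radar data) come from the text.

```python
# Assumed lookup from application identifier to required target data type.
APP_DATA_TYPE = {"lane_keeping": "image", "obstacle_avoidance": "radar"}

def select_target_devices(devices_in_area, app_id):
    """Keep only registered devices in the target area that can upload
    the data type required by the target driving application."""
    wanted = APP_DATA_TYPE[app_id]
    return [d for d in devices_in_area if wanted in d["data_types"]]

devices = [
    {"id": "car-A", "data_types": {"image", "radar"}},   # camera + radar
    {"id": "phone-1", "data_types": {"position"}},       # position only
    {"id": "car-B", "data_types": {"radar"}},            # no HD camera
]
targets = select_target_devices(devices, "lane_keeping")
# Only car-A qualifies as a target registration device for image data.
```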
Step S205, acquiring first environmental data and second environmental data in the target environmental data, wherein the first environmental data represents road traffic flow information in the target area; the second environmental data characterizes a road environment surrounding the target vehicle.
Step S206, generating a target control command according to the first environment data and the second environment data.
Illustratively, the target environment data obtained by the edge cloud server includes data representing road environment characteristics in different dimensions, that is, first environment data representing road traffic information in the target area and second environment data representing a road environment around the target vehicle. The first environmental data is equivalent to macroscopic data for global vehicle driving planning, and the second environmental data is equivalent to microscopic data for real-time driving control of a single vehicle. By combining the first environmental data and the second environmental data, a control instruction reasonable at both a macro level and a micro level can be generated to realize control of the target vehicle.
Illustratively, as shown in fig. 8, the specific implementation steps of step S206 include:
step S2061, generating first path planning information according to the first environment data.
Step S2062, on the basis of the first path planning information, generating second path planning information according to the second environment data.
And step S2063, generating a target control instruction according to the second path planning information, wherein the target control instruction is used for controlling the target vehicle to automatically run along the path corresponding to the second path planning information.
The first environment data comprises signal light data and traffic flow data uploaded by registered equipment in the target area; the second environment data includes image data and/or radar data uploaded by registered devices within a preset range of the target vehicle.
Fig. 9 is a schematic diagram of the process in which the edge cloud server obtains the first environment data and the second environment data; the process is described in detail with reference to fig. 9. During the driving of the target vehicle, after the target vehicle sends the first request information to the edge cloud server, the edge cloud server first obtains the signal light data uploaded by roadside devices (shown as intelligent signal lights in the figure) in the target area and obtains the traffic flow data from the position information of each intelligent vehicle in the target area (it is understood that, in other implementations, only one of the signal light data and the traffic flow data may be obtained). Based on the signal light data and the traffic flow data, the road congestion condition is judged and, based on the current position information of the target vehicle, a navigation path that avoids congestion or has the lowest congestion degree and the least travel time is obtained, that is, the first path planning information. Meanwhile, the edge cloud server obtains the image data and radar data uploaded by registered devices within a preset range of the target vehicle, such as image data and radar data uploaded by intelligent vehicles within 100 meters of the target vehicle and positioning information uploaded by smart phones, and plans a more specific driving path on the basis of the first path planning information using this information, so as to achieve safe following and lane changing with respect to other vehicles within the preset range, avoidance of pedestrians, and the like. Finally, a corresponding target control instruction for controlling the target vehicle to drive is generated based on the second path planning information and sent to the target vehicle.
In the subsequent steps, after receiving the target control instruction, the target vehicle can drive along the driving path corresponding to the second path planning information, so that, while avoiding congestion and shortening driving time, the target vehicle realizes safe following, lane changing, obstacle avoidance, and other functions during driving.
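The two-stage planning in steps S2061 to S2063 can be sketched as follows. The congestion scores, route representation, and lane-change rule are illustrative assumptions, not the patent's algorithm; the sketch only shows the structure of macro planning from the first environment data followed by micro refinement from the second environment data.

```python
def plan_first_path(candidate_routes, congestion):
    """First path planning: from signal-light and traffic-flow data
    (first environment data), pick the least congested candidate route."""
    return min(candidate_routes, key=lambda route: congestion[route])

def plan_second_path(route, obstacles):
    """Second path planning: refine the chosen route segment by segment
    using nearby image/radar data (second environment data), inserting a
    lane change where an obstacle is reported."""
    return [("change_lane" if seg in obstacles else "keep_lane", seg)
            for seg in route]

candidate_routes = [("s1", "s2"), ("s1", "s3")]          # road-segment routes
congestion = {("s1", "s2"): 0.8, ("s1", "s3"): 0.2}      # from traffic flow

first_path = plan_first_path(candidate_routes, congestion)
second_path = plan_second_path(first_path, obstacles={"s3"})
target_control_instruction = {"drive_along": second_path}
```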
In this embodiment, the edge cloud server obtains corresponding planning information in different dimensions by obtaining the first environment data and the second environment data, and generates a target control instruction to control the target vehicle based on the planning information in different dimensions, so that the target vehicle can realize a high-level automatic driving function.
And step S207, sending the target control instruction to the target vehicle so as to control the target vehicle to automatically run along the path corresponding to the second path planning information.
Exemplarily, after step S207, the method further includes:
step S208, first notification information is sent to the registration device, and the first notification information represents the position of the target vehicle or road traffic information in the target area.
Illustratively, the specific implementation manner of step S208 includes at least one of the following: sending the position of the target vehicle to a mobile terminal, sending the road traffic flow information in the target area to a roadside device, and sending the position of the target vehicle and/or the road traffic flow information in the target area to an intelligent automobile.
Specifically, after obtaining the target environment data, in addition to sending the corresponding target control instruction to the target vehicle that sent the first request information, the edge cloud server may send first notification information representing the position of the target vehicle or road traffic flow information to a registered device. For example, the edge cloud server sends the position of the target vehicle to a smart phone (mobile terminal) through the first notification information, so that the user of the smart phone can learn the position of the target vehicle and avoid it or wait. For another example, the edge cloud server sends the traffic flow information of the road in the target area to an intelligent signal light (roadside device), so that the intelligent signal light can adjust its timing based on the road traffic flow and reduce road congestion. For another example, the edge cloud server sends the position of the target vehicle and/or the road traffic flow information in the target area to other intelligent vehicles, so that those vehicles can avoid the target vehicle or display the road congestion condition in the target area. In this embodiment, the step of sending the first notification information to the registered devices realizes global sharing of the environment data and improves the overall operation efficiency of the vehicles in the target area.
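The per-device-type dispatch of step S208 can be sketched as follows; the function and field names are illustrative assumptions, while the three cases mirror the examples above (position to a mobile terminal, traffic flow to a roadside device, both to an intelligent vehicle).

```python
def build_first_notification(device_type, vehicle_position, traffic_flow):
    """Tailor the first notification information to the receiver type."""
    if device_type == "mobile_terminal":
        return {"vehicle_position": vehicle_position}
    if device_type == "roadside_device":
        return {"traffic_flow": traffic_flow}
    if device_type == "smart_car":
        return {"vehicle_position": vehicle_position,
                "traffic_flow": traffic_flow}
    raise ValueError(f"unknown registered device type: {device_type}")

note_for_signal_light = build_first_notification(
    "roadside_device", (39.9, 116.4), "heavy")
note_for_phone = build_first_notification(
    "mobile_terminal", (39.9, 116.4), "heavy")
```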
Optionally, the method further comprises:
step S209, in response to the second request information sent by the external system, sending target environment data to the external system through a preset communication middleware, so that the external system generates a three-dimensional visualized road condition map of the target area based on the target environment data.
In one possible implementation manner, the edge cloud server may further send the obtained target environment data to an external system, for example, an urban road management system or a road monitoring system, in response to second request information sent by the external system. Because the environment data collected by the registered devices includes multidimensional data such as lidar data and high-definition image data, the environment data can be reconstructed into a three-dimensional visualized road condition map of the target area, thereby realizing real-time monitoring of vehicle driving conditions in the target area and improving the accuracy and real-time performance of vehicle monitoring.
In this embodiment, the implementation manner of step S201 is the same as the implementation manner of step S101 in the embodiment shown in fig. 2 of this application, and is not described in detail here.
Fig. 10 is a schematic structural diagram of a global vehicle control device according to an embodiment of the present application, and as shown in fig. 10, a global vehicle control device 3 according to this embodiment includes:
the transceiving module 31 is configured to receive first request information sent by the target vehicle, where the first request information is used to instruct a digital twin corresponding to the target vehicle to run a target driving application;
the obtaining module 32 is configured to obtain corresponding target environment data according to the first request information, where the target environment data is uploaded by a registered device in the target area and represents perception information of the registered device on an environment where the registered device is located;
and the generating module 33 is configured to generate a target control instruction according to the target environment data, where the target control instruction is used to control the target vehicle to operate in the target area, so as to implement an automatic driving function corresponding to the target driving application.
In a possible implementation manner, the first request information includes an application identifier representing the target driving application; the obtaining module 32 is specifically configured to: acquiring positioning information of a target vehicle; determining a target area according to the positioning information of the target vehicle and the application identifier of the target driving application; and acquiring corresponding target environment data according to the target area.
In a possible implementation manner, when the obtaining module 32 obtains the corresponding target environment data according to the target area, it is specifically configured to: determining a corresponding target data type based on the application identification; obtaining target registration equipment corresponding to the target data type in the target area based on the target data type and the target area; and acquiring target environment data uploaded by at least one target registration device.
In a possible implementation manner, the obtaining module 32 is specifically configured to: acquiring a preset environment data model, wherein the environment data model comprises position information of a target vehicle and environment data uploaded by a registration device; and calling the environment data model according to the first request information to obtain target environment data corresponding to the position information of the target vehicle.
In one possible implementation manner, the target environment data includes first environment data and second environment data, where the first environment data represents road traffic flow information in the target area and the second environment data represents the road environment around the target vehicle; the generating module 33 is specifically configured to: generate first path planning information according to the first environment data; generate second path planning information according to the second environment data on the basis of the first path planning information; and generate a target control instruction according to the second path planning information, where the target control instruction is used to control the target vehicle to automatically drive along the path corresponding to the second path planning information.
In one possible implementation manner, the first environment data includes signal light data and traffic flow data uploaded by registered devices in the target area; the second environment data includes image data and/or radar data uploaded by registered devices within a preset range of the target vehicle.
In one possible implementation, the registration device includes at least one of: the system comprises an intelligent automobile, roadside equipment and a mobile terminal; a transceiver module 31, further configured to: and sending first notification information to the registration device, wherein the first notification information represents the position of the target vehicle or road traffic information in the target area.
In a possible implementation manner, when sending the first notification information to a registered device, the transceiver module 31 is specifically configured to: send the position of the target vehicle to a mobile terminal, send the road traffic flow information in the target area to a roadside device, and send the position of the target vehicle and/or the road traffic flow information in the target area to an intelligent automobile.
In a possible implementation manner, the transceiver module 31 is further configured to: and responding to the second request information sent by the external system, and sending target environment data to the external system through a preset communication middleware so that the external system generates a three-dimensional visual road condition map of the target area based on the target environment data.
The transceiver module 31, the obtaining module 32 and the generating module 33 are connected in sequence. The global vehicle control device 3 provided in this embodiment may execute the technical solution of the method embodiment shown in any one of fig. 2 to 9, and the implementation principle and the technical effect are similar, and are not described herein again.
Fig. 11 is a schematic view of an electronic device according to an embodiment of the present application, and as shown in fig. 11, an electronic device 4 according to the embodiment includes: a processor 41, and a memory 42 communicatively coupled to the processor 41.
Wherein the memory 42 stores computer-executable instructions;
the processor 41 executes computer-executable instructions stored in the memory 42 to implement the global vehicle control method provided in any one of the embodiments corresponding to fig. 2 to 9 of the present application.
The memory 42 and the processor 41 are connected by a bus 43.
The relevant descriptions and effects corresponding to the steps in the embodiments corresponding to fig. 2 to fig. 9 can be understood, and are not described in detail herein.
One embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the global vehicle control method provided in any one of the embodiments corresponding to fig. 2 to 9 of the present application.
The computer readable storage medium may be, among others, ROM, Random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
One embodiment of the present application provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the global vehicle control method provided in any one of the embodiments corresponding to fig. 2 to fig. 9 of the present application.
Fig. 12 is a block diagram of a terminal device 800, which may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc., according to an exemplary embodiment of the present application.
Terminal device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communications component 816.
The processing component 802 generally controls overall operation of the terminal device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the terminal device 800. Examples of such data include instructions for any application or method operating on terminal device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of terminal device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device 800.
The multimedia component 808 includes a screen providing an output interface between the terminal device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. When the terminal device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive an external audio signal when the terminal device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor component 814 includes one or more sensors for providing various aspects of state assessment for terminal device 800. For example, sensor assembly 814 can detect an open/closed state of terminal device 800 and the relative positioning of components, such as the display and keypad of terminal device 800; sensor assembly 814 can also detect a change in position of terminal device 800 or a component of terminal device 800, the presence or absence of user contact with terminal device 800, orientation or acceleration/deceleration of terminal device 800, and a change in temperature of terminal device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
Communication component 816 is configured to facilitate communications between terminal device 800 and other devices in a wired or wireless manner. The terminal device 800 may access a wireless network based on a communication standard, such as WiFi, 3G, 4G, 5G, or other standard communication networks, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the methods provided by any of the embodiments of fig. 2-9 of the present application.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions, such as the memory 804 including instructions, executable by the processor 820 of the terminal device 800 to perform the method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
An embodiment of the present application further provides a non-transitory computer-readable storage medium, and when a processor of a terminal device executes instructions in the storage medium, the terminal device 800 is enabled to execute the method provided in any embodiment corresponding to fig. 2 to fig. 9 of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules is only a division of logical functions, and an actual implementation may divide them differently; multiple modules or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or modules, and may be electrical, mechanical, or of another form.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of global vehicle control, the method comprising:
receiving first request information sent by a target vehicle, wherein the first request information is used for indicating a digital twin corresponding to the target vehicle to run a target driving application;
acquiring corresponding target environment data according to the first request information, wherein the target environment data are uploaded by a registered device in a target area and represent perception information of the registered device to the environment where the registered device is located;
generating a target control instruction according to the target environment data, wherein the target control instruction is used for controlling the target vehicle to run in the target area so as to realize an automatic driving function corresponding to the target driving application;
wherein the first request information comprises an application identifier representing the target driving application, and the acquiring of the corresponding target environment data according to the first request information comprises:
acquiring positioning information of the target vehicle;
determining the target area according to the positioning information of the target vehicle and the application identifier of the target driving application;
determining a corresponding target data type based on the application identification, wherein the target data type comprises at least one of the following items: image data, radar data, and location information;
obtaining target registration equipment corresponding to the target data type in the target area based on the target data type and the target area;
and acquiring target environment data uploaded by at least one target registration device.
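Although the claim is expressed in method language, the data-acquisition flow it recites (map the application identifier to the data types it needs, then select registered devices in the target area that supply those types) can be sketched in a few lines. Everything below — `APP_DATA_TYPES`, `RegisteredDevice`, the device list — is hypothetical illustration, not the patented implementation.

```python
from dataclasses import dataclass

# Hypothetical mapping: each driving application declares the perception
# data types it requires (cf. "image data, radar data, and position information").
APP_DATA_TYPES = {
    "auto_valet_parking": {"image", "radar", "position"},
    "highway_pilot": {"radar", "position"},
}

@dataclass
class RegisteredDevice:
    device_id: str
    area: str
    data_types: set  # kinds of perception data this device uploads

def select_target_devices(devices, target_area, app_id):
    """Return devices in the target area that can supply at least one of the
    data types required by the requested driving application."""
    needed = APP_DATA_TYPES[app_id]
    return [
        d for d in devices
        if d.area == target_area and d.data_types & needed
    ]

devices = [
    RegisteredDevice("roadside-1", "area-A", {"image", "radar"}),
    RegisteredDevice("car-7", "area-B", {"radar"}),
    RegisteredDevice("phone-3", "area-A", {"position"}),
]
selected = select_target_devices(devices, "area-A", "auto_valet_parking")
print([d.device_id for d in selected])  # devices in area-A with a needed type
```

In a real system the selection would run in the cloud against a registry of vehicles, roadside units, and mobile terminals; the sketch only shows the filtering step that the claim describes.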
2. The method of claim 1, wherein acquiring the corresponding target environment data according to the first request information comprises:
acquiring a preset environment data model, wherein the environment data model comprises the position information of the target vehicle and environment data uploaded by a registration device;
and calling the environment data model according to the first request information to obtain target environment data corresponding to the position information of the target vehicle.
3. The method of claim 1, wherein the target environment data comprises first environment data and second environment data, the first environment data characterizes road traffic information within the target area, and the second environment data characterizes the road environment around the target vehicle; and wherein generating the target control instruction according to the target environment data comprises:
generating first path planning information according to the first environment data;
generating second path planning information according to the second environment data on the basis of the first path planning information;
and generating a target control instruction according to the second path planning information, wherein the target control instruction is used for controlling a target vehicle to automatically run along a path corresponding to the second path planning information.
4. The method of claim 3,
the first environment data comprise signal lamp data and traffic flow data uploaded by the registered equipment in the target area;
the second environment data comprises image data and/or radar data uploaded by registered equipment within a preset range of the target vehicle.
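As an illustration only, the two-stage planning of claims 3 and 4 — a coarse route from area-wide traffic flow (first environment data), then a local refinement against obstacles reported near the vehicle (second environment data) — might look like the sketch below. Corridor names, waypoints, and traffic figures are invented for the example and carry no relation to the claimed implementation.

```python
def plan_global_route(traffic_flow):
    """First path planning: pick the corridor with the lightest traffic,
    using area-wide traffic-flow data (first environment data)."""
    return min(traffic_flow, key=traffic_flow.get)

def refine_local_path(route, obstacles):
    """Second path planning: drop waypoints of the chosen route that are
    blocked by obstacles sensed around the vehicle (second environment data)."""
    return [wp for wp in route_waypoints[route] if wp not in obstacles]

# Hypothetical map data: two candidate corridors and their waypoints.
route_waypoints = {
    "corridor-north": ["n1", "n2", "n3"],
    "corridor-south": ["s1", "s2", "s3"],
}
traffic_flow = {"corridor-north": 12, "corridor-south": 47}  # vehicles/min

route = plan_global_route(traffic_flow)          # coarse, area-level choice
path = refine_local_path(route, obstacles={"n2"})  # local, sensor-level refinement
print(route, path)
```

The target control instruction of the claim would then steer the vehicle along `path`; control generation itself is outside this sketch.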
5. The method of any of claims 1-4, wherein the registration device comprises at least one of: an intelligent automobile, a roadside device, and a mobile terminal;
the method further comprises the following steps:
and sending first notification information to the registration device, wherein the first notification information represents the position of the target vehicle, or road traffic information in the target area.
6. The method of claim 5, wherein the sending the first notification information to the registered device comprises at least one of:
transmitting the location of the target vehicle to the mobile terminal,
transmitting road traffic information within the target area to the roadside device,
and sending the position of the target vehicle and/or the road traffic information in the target area to the intelligent automobile.
7. The method according to any one of claims 1-4, further comprising:
and responding to second request information sent by an external system, sending the target environment data to the external system through a preset communication middleware, so that the external system generates a three-dimensional visual road condition map of the target area based on the target environment data.
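A minimal sketch of the claim-7 interaction follows, with an in-process handler registry standing in for a real communication middleware (e.g. a DDS- or MQTT-style bus). The topic name, cache, and data shapes are all hypothetical; only the request/response pattern reflects the claim.

```python
class Middleware:
    """Toy stand-in for a preset communication middleware: external systems
    issue requests on a topic and receive the handler's reply."""

    def __init__(self):
        self._handlers = {}

    def register(self, topic, handler):
        self._handlers[topic] = handler

    def request(self, topic, payload):
        return self._handlers[topic](payload)

mw = Middleware()

# Hypothetical cache of target environment data, keyed by target area.
environment_cache = {"area-A": {"signal_lights": 4, "flow": 12}}

# The global controller exposes the environment data so that an external
# system can, for example, render a 3-D visualised road-condition map.
mw.register("env_data", lambda area: environment_cache.get(area, {}))

print(mw.request("env_data", "area-A"))  # second request -> environment data
```

A production system would replace `Middleware` with the actual bus and add serialisation and access control; the sketch only shows the data handover the claim describes.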
8. A global vehicle control device, comprising:
the system comprises a transceiving module, a control module and a display module, wherein the transceiving module is used for receiving first request information sent by a target vehicle, the first request information is used for indicating a digital twin corresponding to the target vehicle to run a target driving application, and the first request information comprises an application identifier representing the target driving application;
the acquisition module is used for acquiring corresponding target environment data according to the first request information, wherein the target environment data is uploaded by the registered equipment in the target area and represents the perception information of the registered equipment to the environment where the registered equipment is located;
the generating module is used for generating a target control instruction according to the target environment data, wherein the target control instruction is used for controlling the target vehicle to run in the target area so as to realize an automatic driving function corresponding to the target driving application;
the acquisition module is specifically configured to: acquire positioning information of the target vehicle; determine the target area according to the positioning information of the target vehicle and the application identifier of the target driving application; determine a corresponding target data type based on the application identifier, wherein the target data type comprises at least one of the following items: image data, radar data, and position information; obtain, based on the target data type and the target area, a target registration device corresponding to the target data type in the target area; and acquire target environment data uploaded by at least one target registration device.
9. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer execution instructions;
the processor executes computer-executable instructions stored by the memory to implement the global vehicle control method of any one of claims 1-7.
10. A computer-readable storage medium having computer-executable instructions stored therein, which when executed by a processor, are configured to implement the global vehicle control method of any one of claims 1 to 7.
CN202210738441.6A 2022-06-28 2022-06-28 Global vehicle control method and device, electronic equipment and storage medium Active CN114802311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210738441.6A CN114802311B (en) 2022-06-28 2022-06-28 Global vehicle control method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114802311A CN114802311A (en) 2022-07-29
CN114802311B true CN114802311B (en) 2022-09-13

Family

ID=82523482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210738441.6A Active CN114802311B (en) 2022-06-28 2022-06-28 Global vehicle control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114802311B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117177213B (en) * 2023-09-25 2024-02-13 北京皓宽网络科技有限公司 Data processing method and system based on data exchange

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110083163A (en) * 2019-05-20 2019-08-02 三亚学院 A kind of 5G C-V2X bus or train route cloud cooperation perceptive method and system for autonomous driving vehicle
CN110335488A (en) * 2019-07-24 2019-10-15 深圳成谷科技有限公司 A kind of Vehicular automatic driving method and apparatus based on bus or train route collaboration
EP3568843A2 (en) * 2017-01-10 2019-11-20 Cavh Llc Connected automated vehicle highway systems and methods
CN111781933A (en) * 2020-07-27 2020-10-16 扬州大学 High-speed automatic driving vehicle implementation system and method based on edge calculation and spatial intelligence
CN113581211A (en) * 2021-08-30 2021-11-02 深圳清航智行科技有限公司 Vehicle driving control method, system and device and readable storage medium
CN114103996A (en) * 2021-11-25 2022-03-01 国汽智控(北京)科技有限公司 Automatic driving control method, device and equipment based on shared sensing data
CN114179829A (en) * 2021-12-24 2022-03-15 中汽创智科技有限公司 Multi-end cooperative vehicle driving method, device, system and medium
CN114419572A (en) * 2022-03-31 2022-04-29 国汽智控(北京)科技有限公司 Multi-radar target detection method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN114802311A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
US10669980B2 (en) Method, apparatus, and system for launching engine start-stop function in vehicles
US20190052842A1 (en) System and Method for Improved Obstable Awareness in Using a V2x Communications System
CN109017554B (en) Driving reminding method and device and computer readable storage medium
JP2017536640A (en) Method, apparatus, program and recording medium
CN113442929A (en) Vehicle control method, device, equipment and computer readable storage medium
CN110979332B (en) Control method and device of intelligent automobile and storage medium
CN111516690B (en) Control method and device of intelligent automobile and storage medium
CN113479195A (en) Method for automatic valet parking and system for carrying out said method
CN114802311B (en) Global vehicle control method and device, electronic equipment and storage medium
CN115061808A (en) Vehicle cloud computing power scheduling method and device, electronic equipment and storage medium
CN114882464B (en) Multi-task model training method, multi-task processing method, device and vehicle
CN114419572B (en) Multi-radar target detection method and device, electronic equipment and storage medium
CN114935334B (en) Construction method and device of lane topological relation, vehicle, medium and chip
CN117827997A (en) Map rendering method, map updating device and server
CN110969858B (en) Traffic accident processing method and device, storage medium and electronic equipment
EP4346247A1 (en) Data interaction method and apparatus, vehicle, readable storage medium and chip
CN115170630B (en) Map generation method, map generation device, electronic equipment, vehicle and storage medium
CN114937351B (en) Motorcade control method and device, storage medium, chip, electronic equipment and vehicle
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN114863717A (en) Parking space recommendation method and device, storage medium and vehicle
CN116834767A (en) Motion trail generation method, device, equipment and storage medium
CN111785044A (en) Traffic light control method and device
CN112231019A (en) Map engine architecture based on distributed microservice
CN115221260B (en) Data processing method, device, vehicle and storage medium
CN115214720A (en) Model determination method, device, equipment and storage medium applied to automatic driving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant