CN114987494A - Driving scene processing method and device and electronic equipment - Google Patents

Driving scene processing method and device and electronic equipment

Info

Publication number
CN114987494A
CN114987494A
Authority
CN
China
Prior art keywords
scene
data
scanning
automatic driving
autonomous vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210601113.1A
Other languages
Chinese (zh)
Inventor
饶文龙
刘颖楠
宫国浩
薛晶晶
张亚玲
邢亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202210601113.1A priority Critical patent/CN114987494A/en
Publication of CN114987494A publication Critical patent/CN114987494A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0002Automatic control, details of type of controller or control system architecture
    • B60W2050/0004In digital systems, e.g. discrete-time systems involving sampling
    • B60W2050/0005Processor details or data handling, e.g. memory registers or chip architecture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides a driving scene processing method and apparatus, and an electronic device, and relates to the field of artificial intelligence, in particular to the technical fields of autonomous driving, intelligent transportation, and the like. The specific implementation scheme is as follows: acquiring driving state data of an autonomous vehicle; detecting, based on the driving state data, whether the autonomous vehicle satisfies a predetermined scene rule; and, when it is detected that the autonomous vehicle satisfies the predetermined scene rule, determining that the autonomous vehicle has entered the predetermined scene corresponding to that rule, and recording, on the autonomous vehicle, scene data of the autonomous vehicle in the predetermined scene.

Description

Driving scene processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the technical fields of autonomous driving, intelligent transportation, and the like, and specifically to a driving scene processing method and apparatus and an electronic device.
Background
In the related art, scene data mining can be carried out only after the autonomous vehicle has returned to the machine room and all of its data has landed (been written to disk), and the recorded data is not accurate enough.
The related art therefore suffers from the technical problems of low processing efficiency and insufficient accuracy of autonomous driving scene data.
Disclosure of Invention
The present disclosure provides a driving scene processing method and apparatus, and an electronic device.
According to an aspect of the present disclosure, there is provided a driving scene processing method, including: acquiring driving state data of an autonomous vehicle; detecting, based on the driving state data, whether the autonomous vehicle satisfies a predetermined scene rule; and, when it is detected that the autonomous vehicle satisfies the predetermined scene rule, determining that the autonomous vehicle has entered the predetermined scene corresponding to that rule, and recording, on the autonomous vehicle, scene data of the autonomous vehicle in the predetermined scene.
Optionally, acquiring driving state data of the autonomous vehicle includes: collecting running state data of the autonomous vehicle through a sensor on the autonomous vehicle; positioning the autonomous vehicle through a positioning device on the autonomous vehicle to obtain positioning data of the autonomous vehicle; determining driving prediction data for the autonomous vehicle based on the running state data and the positioning data; and determining the driving state data of the autonomous vehicle based on the running state data, the positioning data, and the driving prediction data.
Optionally, the recording, on the autonomous vehicle, scene data of the autonomous vehicle in a predetermined scene includes: recording, on an autonomous vehicle, a start time for the autonomous vehicle to enter a predetermined scene; recording process data of the autonomous vehicle in a predetermined scene on the autonomous vehicle; recording, on the autonomous vehicle, an end time at which the autonomous vehicle ends the predetermined scene; wherein the scene data includes: start time, process data and end time.
Optionally, after recording the scene data of the autonomous vehicle in the predetermined scene on the autonomous vehicle, the method further includes: performing slice storage of the recorded scene data on the autonomous vehicle.
Optionally, after recording the scene data of the autonomous vehicle in the predetermined scene on the autonomous vehicle, the method includes: uploading the scene data to a cloud-side device, so that the cloud-side device detects the vehicle performance of the autonomous vehicle based on the scene data of the autonomous vehicle.
According to another aspect of the present disclosure, there is provided a driving scene processing method, including: receiving scene data sent by an automatic driving vehicle, wherein the scene data is data recorded when the automatic driving vehicle meets a preset scene rule and enters a corresponding preset scene; setting a label for scene data; and storing the scene data into a scene library of the cloud-side equipment based on the label.
Optionally, the method further includes: detecting the performance of the autonomous vehicle based on the scene data to obtain a performance detection result.
Optionally, detecting the performance of the autonomous vehicle based on the scene data to obtain a performance detection result includes: detecting, based on the scene data, the autonomous driving performance of the vehicle in the predetermined scene to obtain the passing result of the autonomous vehicle in the predetermined scene.
According to another aspect of the present disclosure, there is provided a driving scene processing apparatus, including: a first obtaining module, configured to obtain identification information of a scanning device; and a first sending module, configured to send a scanning request to a target device, where the scanning request carries the identification information and a storage path, the scanning request is used to instruct the target device to send a scanning instruction to the scanning device, and the storage path is used to store a scanning result obtained by scanning by the scanning device.
Optionally, the apparatus further comprises: the first receiving module is used for receiving a scanning result sent by the target equipment; and the second sending module is used for sending a deleting instruction for deleting the scanning result to the target equipment under the condition that the scanning result does not meet the preset requirement.
Optionally, the first obtaining module includes: the first acquisition unit is used for scanning the two-dimensional code containing the identification information of the scanning equipment and acquiring the identification information of the scanning equipment.
Optionally, the apparatus further comprises: the second acquisition module is used for acquiring the connection information of the target equipment; and the first binding module is used for establishing a binding relationship with the target equipment based on the connection information.
Optionally, the second obtaining module includes: and the second acquisition unit is used for acquiring the connection information of the target equipment by scanning the connection information two-dimensional code, wherein the connection information two-dimensional code is generated based on the transmission control protocol application on the target equipment.
According to another aspect of the present disclosure, there is provided a scanning apparatus including: the second receiving module is used for receiving a scanning request sent by the mobile equipment, wherein the scanning request comprises the identification information of the scanning equipment and a storage path; the third sending module is used for responding to the scanning request and sending a scanning instruction to the scanning equipment; and the third receiving module is used for receiving a scanning result obtained after the scanning device scans based on the scanning instruction and storing the scanning result to the storage path.
Optionally, the apparatus further comprises: the fourth sending module is used for sending the scanning result to the mobile equipment; and the fourth receiving module is used for receiving a deleting instruction sent by the mobile equipment under the condition that the mobile equipment determines that the scanning result does not meet the preset requirement, and deleting the scanning result according to the deleting instruction.
Optionally, the apparatus further comprises: the first generation module is used for generating a connection information two-dimensional code based on the transmission control protocol application; and the second binding module is used for establishing a binding relationship with the mobile equipment based on the connection information two-dimensional code.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the methods described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any of the above.
According to another aspect of the disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of any of the above.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. In the drawings:
fig. 1 is a flowchart of a first driving scenario processing method provided according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a second driving scenario processing method according to an embodiment of the disclosure;
FIG. 3 is a flowchart of a real-vehicle automatic mining method for AD scene cases of an L4 autonomous vehicle provided according to an alternative embodiment of the present disclosure;
fig. 4 is a block diagram of a driving scene processing apparatus provided according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of a scanning device provided according to an embodiment of the present disclosure;
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Description of the terms
L4: a level of driving automation at which the automated system can complete all driving tasks and monitor the driving environment under certain circumstances and specific conditions; within the domain in which autonomous driving can operate, the driver and occupants are no longer responsible for any driving-related task.
Landing refers to writing data to a magnetic disk (storage medium).
BOS (Baidu Object Storage) is a stable, secure, efficient, and highly scalable cloud storage service that supports multiple storage classes, such as standard, infrequent-access, cold, and archive storage, to meet the storage requirements of multiple scenarios. Users can store any amount of unstructured data in any form in BOS and manage and process the data.
In an embodiment of the present disclosure, a driving scenario processing method is provided. Fig. 1 is a flowchart of a first driving scenario processing method provided according to an embodiment of the present disclosure; as shown in fig. 1, the flow includes the following steps:
step S102, acquiring driving state data of the automatic driving vehicle;
step S104, detecting whether the automatic driving vehicle meets a preset scene rule or not based on the driving state data;
step S106, when it is detected that the autonomous vehicle satisfies the predetermined scene rule, determining that the autonomous vehicle has entered the predetermined scene corresponding to the predetermined scene rule, and recording, on the autonomous vehicle, the scene data of the autonomous vehicle in the predetermined scene.
Through the above steps, whether the autonomous vehicle satisfies a predetermined scene rule is detected, and the scene data corresponding to the predetermined scene rule is automatically recorded on the autonomous vehicle when the rule is satisfied. Because the current scene is checked against the predetermined rule before recording begins, and automatic recording starts only when the rule is satisfied, the recording of scene data is neither blind nor unfiltered, and no additional processing of the recorded data is needed to determine whether it meets the requirements.
It should be noted that the predetermined scene rules serve as a means of deriving the driving scene directly from the state data of the autonomous vehicle. The embodiment of the present disclosure presets rules related to driving scenes, which makes it convenient to determine whether the current autonomous driving scene belongs to a driving scene that should be recorded, thereby achieving automatic and accurate recording of driving data that meets the scene requirements.
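As a concrete illustration (not part of the claimed method), a predetermined scene rule can be modeled as a named predicate over the driving state data. The following minimal Python sketch makes that idea explicit; the dataclass fields, rule names, and thresholds are illustrative assumptions, not taken from the patent:
```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

# Driving state data assembled from sensor, positioning, and prediction
# sources (field names are illustrative assumptions).
@dataclass
class DrivingState:
    speed_mps: float
    steering_angle_deg: float
    predicted_action: str        # e.g. "turn_left", "brake", "straight"
    is_night: bool

# A predetermined scene rule: a named predicate over the driving state.
# Satisfying the predicate means the vehicle has entered that scene.
SceneRule = Callable[[DrivingState], bool]

SCENE_RULES: Dict[str, SceneRule] = {
    "left_turn": lambda s: s.predicted_action == "turn_left",
    "hard_brake": lambda s: s.predicted_action == "brake" and s.speed_mps > 8.0,
    "night_driving": lambda s: s.is_night,
}

def match_scene(state: DrivingState) -> Optional[str]:
    """Return the first predetermined scene rule the current driving
    state satisfies, or None if no rule matches."""
    for name, rule in SCENE_RULES.items():
        if rule(state):
            return name
    return None
```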
As an alternative embodiment, acquiring driving state data of the autonomous vehicle includes: collecting running state data of the autonomous vehicle through a sensor on the autonomous vehicle; positioning the autonomous vehicle through a positioning device on the autonomous vehicle to obtain positioning data of the autonomous vehicle; determining driving prediction data for the autonomous vehicle based on the running state data and the positioning data; and determining the driving state data of the autonomous vehicle based on the running state data, the positioning data, and the driving prediction data. By acquiring the current running state data and positioning data of the autonomous vehicle, the driving of the autonomous vehicle can be predicted from these data, and the current driving state data of the autonomous vehicle, such as driving straight, turning left, or reversing, is then determined from the running state data, the positioning data, and the driving prediction data.
It should be noted that the autonomous driving action the vehicle is about to perform can be predicted based on the running state data and the positioning data; for example, a braking action can be predicted when the sensor and the positioning device determine that the autonomous vehicle is approaching an intersection and the signal light is red.
It should be noted that the sensor may be one that collects internal running state data of the autonomous vehicle, such as acceleration, deceleration, or reversing, or it may be a sensor arranged on the autonomous vehicle to collect external environment data; for example, it may capture the driving environment around the vehicle, such as night driving or driving in rain, enabling more accurate data recording.
In the above process of determining the driving prediction data, the prediction may also be made from the autonomous driving destination and real-time positioning information in combination with map data; for example, it may be determined from the route to the destination that the autonomous vehicle is about to turn left.
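Under the same illustrative assumptions, the three data sources might be fused as follows (this sketch reuses the DrivingState dataclass from the previous fragment; the prediction logic is a placeholder, not the patent's method):
```python
from dataclasses import dataclass

@dataclass
class RunningState:            # collected by on-board sensors
    speed_mps: float
    acceleration_mps2: float
    gear: str                  # e.g. "D" (drive), "R" (reverse)

@dataclass
class Positioning:             # produced by the on-board positioning device
    lat: float
    lon: float
    heading_deg: float

def predict_action(running: RunningState, pos: Positioning) -> str:
    """Hypothetical prediction step: infer the upcoming driving action
    from the running state and position (a real system would also
    consult map/route data, signal lights, and so on)."""
    if running.gear == "R":
        return "reverse"
    if running.acceleration_mps2 < -2.0:
        return "brake"
    return "straight"

def determine_driving_state(running: RunningState, pos: Positioning) -> DrivingState:
    """Combine running state data, positioning data, and driving
    prediction data into the driving state data used for rule matching."""
    return DrivingState(
        speed_mps=running.speed_mps,
        steering_angle_deg=0.0,    # placeholder; would come from sensors
        predicted_action=predict_action(running, pos),
        is_night=False,            # placeholder; from ambient-light sensing
    )
```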
As an alternative embodiment, recording scene data of the autonomous vehicle in the predetermined scene on the autonomous vehicle includes: recording, on the autonomous vehicle, the start time at which the autonomous vehicle enters the predetermined scene; recording, on the autonomous vehicle, process data of the autonomous vehicle in the predetermined scene; and recording, on the autonomous vehicle, the end time at which the autonomous vehicle leaves the predetermined scene; the scene data includes the start time, the process data, and the end time. In other words, once the autonomous vehicle satisfies the predetermined scene rule, the start time at which it enters the predetermined scene, the end time at which it leaves the scene, and the process data of its autonomous driving within the scene are all recorded automatically and accurately.
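A minimal sketch of this recording behavior follows; the record layout (a dict holding start_time, process_data, and end_time) is an assumption:
```python
import time
from typing import Optional

class SceneRecorder:
    """On-vehicle scene recording sketch: when a predetermined scene rule
    is first satisfied the start time is recorded, process data is appended
    while the scene lasts, and the end time is recorded on leaving it."""

    def __init__(self) -> None:
        self.active: dict = {}       # scene name -> in-progress record
        self.completed: list = []    # finished scene records

    def update(self, matched_scene: Optional[str], frame: dict) -> None:
        now = time.time()
        # Scene just entered: record the start time.
        if matched_scene and matched_scene not in self.active:
            self.active[matched_scene] = {
                "scene": matched_scene, "start_time": now, "process_data": [],
            }
        # Scene ongoing: record process data.
        if matched_scene:
            self.active[matched_scene]["process_data"].append(frame)
        # Scene left: record the end time and close the record.
        for name in list(self.active):
            if name != matched_scene:
                record = self.active.pop(name)
                record["end_time"] = now
                self.completed.append(record)
```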
As an alternative embodiment, after recording scene data of the autonomous vehicle in the predetermined scene on the autonomous vehicle, the method further includes: performing slice storage of the recorded scene data on the autonomous vehicle. Slicing the recorded scene data allows the scene data for the current predetermined scene to be stored effectively; for example, it can be stored on a storage medium or in the cloud, and the sliced data can be labeled by autonomous driving scene at storage time to facilitate later queries.
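One plausible slicing scheme is sketched below; the slice size and metadata layout are assumptions, since the patent does not specify them:
```python
def slice_scene_data(record: dict, max_frames: int = 500) -> list:
    """Split one recorded scene into fixed-size slices for storage;
    each slice keeps the scene label and time span for later labeling."""
    frames = record["process_data"]
    slices = []
    for i in range(0, len(frames), max_frames):
        slices.append({
            "scene": record["scene"],
            "slice_index": len(slices),
            "start_time": record["start_time"],
            "end_time": record["end_time"],
            "frames": frames[i:i + max_frames],
        })
    return slices
```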
As an alternative embodiment, after recording scene data of the autonomous vehicle in the predetermined scene on the autonomous vehicle, the method includes: uploading the scene data to a cloud-side device, so that the cloud-side device detects the vehicle performance of the autonomous vehicle based on the scene data. While the scene data is being recorded, it can be uploaded to the cloud-side device in real time so that the cloud side can process it immediately; scene data can thus be mined without waiting for all autonomous driving scene data to land on disk, which solves the technical problems in the related art of low processing efficiency and insufficient accuracy of autonomous driving scene data.
It should be noted that the scene data uploaded to the cloud-side device may be uploaded by the autonomous vehicle in real time while driving; the uploaded data may be added directly to the autonomous driving scene database, or it may undergo scene mining in the cloud before being added. The scene data in the scene database can be used to run simulation tests on the autonomous vehicle, and the test results then characterize the vehicle's autonomous driving capability.
It should be noted that, in the embodiment of the present disclosure, slice storage and uploading of the recorded data to the cloud-side device may be performed simultaneously; that is, real-time data mining does not interfere with the storage of the complete recorded data.
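A sketch of the upload step over plain HTTP follows; the endpoint and payload shape are hypothetical, and a production system would more likely use the storage service's own SDK (e.g., BOS) than raw HTTP:
```python
import json
import urllib.request

def upload_scene_slice(slice_record: dict, endpoint: str) -> int:
    """Upload one scene-data slice to the cloud-side device and return
    the HTTP status code (endpoint and payload shape are assumptions)."""
    body = json.dumps(slice_record).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```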
In an embodiment of the present disclosure, a second driving scenario processing method is further provided. Fig. 2 is a flowchart of the second driving scenario processing method provided according to an embodiment of the present disclosure; as shown in fig. 2, the flow includes the following steps:
step S202, receiving scene data sent by an automatic driving vehicle, wherein the scene data is data recorded when the automatic driving vehicle meets a preset scene rule and enters a corresponding preset scene;
step S204, setting a label for the scene data;
and step S206, storing the scene data into a scene library of the cloud-side equipment based on the label.
Through the above steps, labels are set for the data recorded when the autonomous vehicle satisfies a predetermined scene rule and enters the corresponding predetermined scene, and the scene data is stored in the scene library of the cloud-side device based on those labels, enabling orderly and efficient storage of the scene data of the autonomous vehicle in predetermined scenes.
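A cloud-side sketch of this tagging-and-storing step is given below; SQLite stands in for the scene library, and the table layout and label derivation (label taken from the matched rule name) are assumptions:
```python
import json
import sqlite3

def store_scene(conn: sqlite3.Connection, slice_record: dict) -> None:
    """Tag an uploaded scene-data slice and store it in the scene
    library, indexed by its label."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS scene_library ("
        "label TEXT, start_time REAL, end_time REAL, payload TEXT)"
    )
    label = slice_record["scene"]   # label derived from the matched rule
    conn.execute(
        "INSERT INTO scene_library VALUES (?, ?, ?, ?)",
        (label, slice_record["start_time"], slice_record["end_time"],
         json.dumps(slice_record)),
    )
    conn.commit()
```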
As an alternative embodiment, the method further includes: detecting the performance of the autonomous vehicle based on the scene data to obtain a performance detection result. After the scene data of the autonomous vehicle in the predetermined scene has been stored in the scene library of the cloud-side device, the scene data in the library can be used to detect the performance of the autonomous vehicle, determine its driving behavior in the predetermined scene, and thereby complete the evaluation of its performance.
As an alternative embodiment, detecting the performance of the autonomous vehicle based on the scene data to obtain a performance detection result includes: detecting, based on the scene data, the autonomous driving performance of the vehicle in the predetermined scene to obtain the vehicle's passing result in that scene. When detecting the performance of the autonomous vehicle, its passing behavior in different autonomous driving scenes can serve as the criterion; for example, if the vehicle achieves a high pass rate when its autonomous driving operation is simulated against scene data from the scene library, its performance is good, and changes in the pass rate of such scene simulations can indicate whether the vehicle's performance has improved.
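The pass-rate computation itself is a simple ratio; a sketch, with a hypothetical comparison of two AD versions:
```python
def simulation_pass_rate(results: list) -> float:
    """Fraction of scene-case simulations an AD version passes; results
    is a list of booleans, one per scene case from the scene library."""
    return sum(results) / len(results) if results else 0.0

# Hypothetical usage: an increased pass rate on the same scene library
# suggests the new AD version's capability has improved.
old_rate = simulation_pass_rate([True, True, False, True])   # 0.75
new_rate = simulation_pass_rate([True, True, True, True])    # 1.00
improved = new_rate > old_rate
```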
Based on the above embodiments and alternative embodiments, an alternative implementation is provided, which is described below.
Fig. 3 is a flowchart of a real-vehicle automatic mining method for AD scene cases of an L4 autonomous vehicle according to an alternative embodiment of the present disclosure. As shown in fig. 3, the method proceeds through the following steps:
1. the L4 autonomous vehicle is driving;
2. definition rules for scene data are deployed on the vehicle;
3. perception, positioning, and prediction data are collected;
4. the vehicle's autonomous driving scene matches a defined rule;
5. the virtual driver automatically starts recording data;
6. the data module slices the recorded data;
7. BOS stores the sliced recording data from the vehicle end;
8. the cloud AD (autonomous driving) scene aggregation system tags and stores the uploaded scene data corresponding to the definition rules;
9. a new AD version is simulated using the collected scene data;
10. the simulation pass rate on the scene data is calculated to judge the capability of the new AD version.
Through this alternative implementation of the present disclosure, the complex scene data of L4 autonomous road operation can be recorded accurately and automatically according to preset rules, a high-fidelity scene database for autonomous driving simulation can be built, and the cloud simulation capability for autonomous driving versions is greatly improved.
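For orientation only, the vehicle-end portion of this flow (steps 3 to 7) can be strung together from the earlier hypothetical sketches as follows; all names and values remain illustrative assumptions:
```python
def on_vehicle_cycle(recorder: SceneRecorder, endpoint: str) -> None:
    """One cycle of the vehicle-end pipeline, composed from the earlier
    sketches (DrivingState, match_scene, SceneRecorder, slice_scene_data,
    upload_scene_slice)."""
    running = RunningState(speed_mps=10.0, acceleration_mps2=-3.0, gear="D")
    pos = Positioning(lat=39.9, lon=116.4, heading_deg=90.0)

    state = determine_driving_state(running, pos)    # step 3: collect and fuse data
    scene = match_scene(state)                       # step 4: match against defined rules
    recorder.update(scene, {"state": vars(state)})   # step 5: automatic recording

    while recorder.completed:                        # steps 6-7: slice and upload
        record = recorder.completed.pop()
        for piece in slice_scene_data(record):
            upload_scene_slice(piece, endpoint)
```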
In an embodiment of the present disclosure, a driving scenario processing apparatus is further provided, and fig. 4 is a block diagram of a driving scenario processing apparatus provided according to an embodiment of the present disclosure, and as shown in fig. 4, the apparatus includes: a first obtaining module 41 and a first sending module 42, which are explained below.
A first obtaining module 41, configured to obtain identification information of a scanning device; a first sending module 42, connected to the first obtaining module 41, configured to send a scanning request to the target device, where the scanning request carries identification information and a saving path, the scanning request is used to instruct the target device to send a scanning instruction to the scanning device, and the saving path is used to save a scanning result obtained by scanning by the scanning device.
As an alternative embodiment, the apparatus further comprises: the first receiving module is used for receiving a scanning result sent by the target equipment; and the second sending module is used for sending a deleting instruction for deleting the scanning result to the target equipment under the condition that the scanning result does not meet the preset requirement.
As an alternative embodiment, the first obtaining module 41 includes: the first acquisition unit is used for scanning the two-dimensional code containing the identification information of the scanning equipment and acquiring the identification information of the scanning equipment.
As an alternative embodiment, the apparatus further comprises: the second acquisition module is used for acquiring the connection information of the target equipment; and the first binding module is used for establishing a binding relationship with the target equipment based on the connection information.
As an alternative embodiment, the second obtaining module includes: and the second acquisition unit is used for acquiring the connection information of the target equipment by scanning the connection information two-dimensional code, wherein the connection information two-dimensional code is generated based on the transmission control protocol application on the target equipment.
In an embodiment of the present disclosure, a scanning apparatus is further provided, and fig. 5 is a block diagram of a structure of the scanning apparatus provided according to the embodiment of the present disclosure, as shown in fig. 5, the scanning apparatus includes: a second receiving module 51, a third transmitting module 52 and a third receiving module 53, which will be described below.
A second receiving module 51, configured to receive a scanning request sent by a mobile device, where the scanning request includes identification information of the scanning device and a storage path; a third sending module 52, connected to the second receiving module 51, for sending a scanning instruction to the scanning device in response to the scanning request; and a third receiving module 53, connected to the third sending module 52, configured to receive a scanning result obtained after the scanning device performs scanning based on the scanning instruction, and store the scanning result in the storage path.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read-Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 executes the respective methods and processes described above, such as the driving scene processing method. For example, in some embodiments, the driving scene processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded onto and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the driving scene processing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the driving scene processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (19)

1. A driving scenario processing method, comprising:
acquiring driving state data of an autonomous vehicle;
detecting whether the autonomous vehicle satisfies a predetermined scene rule based on the driving state data;
and when it is detected that the autonomous vehicle satisfies the predetermined scene rule, determining that the autonomous vehicle enters a predetermined scene corresponding to the predetermined scene rule, and recording scene data of the autonomous vehicle in the predetermined scene on the autonomous vehicle.
2. The method of claim 1, wherein the obtaining driving state data for an autonomous vehicle comprises:
collecting running state data of the autonomous vehicle via a sensor on the autonomous vehicle;
positioning the autonomous vehicle through a positioning device on the autonomous vehicle to obtain positioning data of the autonomous vehicle;
determining driving prediction data for the autonomous vehicle based on the running state data and the positioning data;
determining the driving state data of the autonomous vehicle based on the running state data, the positioning data, and the driving prediction data.
3. The method of claim 1, wherein said recording scene data of the autonomous vehicle at the predetermined scene on the autonomous vehicle comprises:
recording, on the autonomous vehicle, a start time at which the autonomous vehicle enters the predetermined scene;
recording, on the autonomous vehicle, process data of the autonomous vehicle at the predetermined scene;
recording, on the autonomous vehicle, an end time at which the autonomous vehicle ends the predetermined scene;
wherein the scene data includes: the start time, the process data, and the end time.
4. The method of claim 1, wherein after recording scene data of the autonomous vehicle at the predetermined scene on the autonomous vehicle, further comprising:
and performing slice storage on the recorded scene data on the automatic driving vehicle.
5. The method of any of claims 1-4, wherein after recording scene data of the autonomous vehicle at the predetermined scene on the autonomous vehicle, comprising:
and uploading the scene data to cloud side equipment, so that the cloud side equipment detects the vehicle performance of the automatic driving vehicle based on the scene data of the automatic driving vehicle.
6. A driving scenario processing method, comprising:
receiving scene data sent by an autonomous vehicle, wherein the scene data is data recorded by the autonomous vehicle when the autonomous vehicle satisfies a predetermined scene rule and enters a corresponding predetermined scene;
setting a label for the scene data;
and storing the scene data into a scene library of the cloud-side equipment based on the label.
7. The method of claim 6, further comprising:
and detecting the performance of the automatic driving vehicle based on the scene data to obtain a performance detection result.
8. The method of claim 7, wherein detecting performance of the autonomous vehicle based on the context data, resulting in a performance detection result, comprises:
and detecting the automatic driving performance of the automatic driving vehicle in the preset scene based on the scene data to obtain the passing result of the automatic driving vehicle in the preset scene.
9. A driving scenario processing apparatus, comprising:
the first acquisition module is used for acquiring the identification information of the scanning equipment;
a first sending module, configured to send a scanning request to a target device, where the scanning request carries the identifier information and a storage path, the scanning request is used to instruct the target device to send a scanning instruction to the scanning device, and the storage path is used to store a scanning result obtained by scanning by the scanning device.
10. The apparatus of claim 9, wherein the apparatus further comprises:
a first receiving module, configured to receive the scanning result sent by the target device;
and the second sending module is used for sending a deleting instruction for deleting the scanning result to the target equipment under the condition that the scanning result does not meet the preset requirement.
11. The apparatus of claim 9, wherein the first obtaining means comprises:
and the first acquisition unit is used for scanning the two-dimensional code containing the identification information of the scanning equipment and acquiring the identification information of the scanning equipment.
12. The apparatus of any of claims 9 to 11, wherein the apparatus further comprises:
the second acquisition module is used for acquiring the connection information of the target equipment;
and the first binding module is used for establishing a binding relationship with the target equipment based on the connection information.
13. The apparatus of claim 12, wherein the second obtaining means comprises:
a second obtaining unit, configured to obtain connection information of the target device by scanning a connection information two-dimensional code, where the connection information two-dimensional code is generated based on a transmission control protocol application on the target device.
14. A scanning device, comprising:
a second receiving module, configured to receive a scanning request sent by a mobile device, where the scanning request includes identification information of the scanning device and a storage path;
a third sending module, configured to send a scanning instruction to the scanning device in response to the scanning request;
and the third receiving module is used for receiving a scanning result obtained after the scanning device scans based on the scanning instruction, and storing the scanning result to the storage path.
15. The apparatus of claim 14, wherein the apparatus further comprises:
a fourth sending module, configured to send the scanning result to the mobile device;
and the fourth receiving module is used for receiving a deleting instruction sent by the mobile equipment under the condition that the mobile equipment determines that the scanning result does not meet the preset requirement, and deleting the scanning result according to the deleting instruction.
16. The apparatus of any one of claims 14 or 15, further comprising:
the first generation module is used for generating a connection information two-dimensional code based on the transmission control protocol application;
and the second binding module is used for establishing a binding relationship with the mobile equipment based on the connection information two-dimensional code.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1 to 8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 8.
CN202210601113.1A 2022-05-30 2022-05-30 Driving scene processing method and device and electronic equipment Pending CN114987494A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210601113.1A CN114987494A (en) 2022-05-30 2022-05-30 Driving scene processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210601113.1A CN114987494A (en) 2022-05-30 2022-05-30 Driving scene processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114987494A 2022-09-02

Family

ID=83031733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210601113.1A Pending CN114987494A (en) 2022-05-30 2022-05-30 Driving scene processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114987494A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024051248A1 (en) * 2022-09-09 2024-03-14 中国第一汽车股份有限公司 Marking method and apparatus for data of autonomous vehicle


Similar Documents

Publication Publication Date Title
CN113240909B (en) Vehicle monitoring method, equipment, cloud control platform and vehicle road cooperative system
CN112579464A (en) Verification method, device and equipment of automatic driving algorithm and storage medium
CN114661574A (en) Method and device for acquiring sample deviation data and electronic equipment
US20230072632A1 (en) Obstacle detection method, electronic device and storage medium
CN112818792A (en) Lane line detection method, lane line detection device, electronic device, and computer storage medium
CN112559371A (en) Automatic driving test method and device and electronic equipment
CN114036253A (en) High-precision map data processing method and device, electronic equipment and medium
CN114987494A (en) Driving scene processing method and device and electronic equipment
CN114238790A (en) Method, apparatus, device and storage medium for determining maximum perception range
CN113722342A (en) High-precision map element change detection method, device and equipment and automatic driving vehicle
CN113391627A (en) Unmanned vehicle driving mode switching method and device, vehicle and cloud server
CN113761306A (en) Vehicle-end data processing method and device
CN115973190A (en) Decision-making method and device for automatically driving vehicle and electronic equipment
CN113610008B (en) Method, device, equipment and storage medium for acquiring state of slag car
CN114721692A (en) System, method and device for upgrading automatic driving model
CN114298772A (en) Information display method, device, equipment and storage medium
CN113450569A (en) Method, device, electronic equipment and storage medium for determining intersection state
US11772681B2 (en) Method and apparatus for processing autonomous driving simulation data, and electronic device
CN114093170B (en) Generation method, system and device of annunciator control scheme and electronic equipment
CN114596707B (en) Traffic control method, traffic control device, traffic control equipment, traffic control system and traffic control medium
CN115394103A (en) Method, device, equipment and storage medium for identifying signal lamp
CN115064007A (en) Abnormal road condition prompting method, device and system, Internet of vehicles and storage medium
CN114353853A (en) Method, apparatus and computer program product for determining detection accuracy
CN115649164A (en) Vehicle control device, autonomous vehicle, and vehicle control method
CN115107793A (en) Travel planning method, device, equipment and storage medium for automatic driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination