CN115649186A - Unmanned operation method and device based on manned driving, electronic equipment and storage medium - Google Patents

Unmanned operation method and device based on manned driving, electronic equipment and storage medium

Info

Publication number
CN115649186A
Authority
CN
China
Prior art keywords
manned
vehicle
area
unmanned
unmanned vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211670197.0A
Other languages
Chinese (zh)
Other versions
CN115649186B (en)
Inventor
张磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yikong Zhijia Technology Co Ltd
Original Assignee
Beijing Yikong Zhijia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yikong Zhijia Technology Co Ltd filed Critical Beijing Yikong Zhijia Technology Co Ltd
Priority to CN202211670197.0A priority Critical patent/CN115649186B/en
Publication of CN115649186A publication Critical patent/CN115649186A/en
Application granted granted Critical
Publication of CN115649186B publication Critical patent/CN115649186B/en
Legal status: Active

Abstract

The present disclosure relates to an unmanned operation method based on manned driving, together with an apparatus, an electronic device, and a storage medium. The method comprises: forming a consist of an unmanned vehicle and a manned vehicle; the unmanned vehicle acquires manned driving data provided by the manned vehicle within the consist, and executes the corresponding unmanned work in the work area upon determining, based on the work information in that data, that it has traveled to the work area. The unmanned vehicle can thus work in the work area using manned driving data collected by the manned vehicle, so that human judgment keeps the vehicles making sound decisions under different environmental conditions and changing scenes, while the unmanned vehicle uses its own strengths to adapt to scenes that change dynamically in real time. The capabilities of the unmanned and manned vehicles complement each other, improving adaptability to the environment and the scene.

Description

Unmanned operation method and device based on manned driving, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of unmanned driving technology, and in particular to an unmanned operation method based on manned driving, an apparatus, an electronic device, and a storage medium.
Background
An unmanned driving system generally recognizes its operating scene through various sensors and related algorithms. Improving the system's perception and recognition of the scene requires continually upgrading the sensors and the algorithms that process their output, for example by adding sensor types and quantities, improving sensor performance, and collecting enough scene data for deep-learning training.
However, in unmanned settings such as mines, the terrain of the loading area changes as excavation advances, and the possible variations cannot be exhaustively enumerated. Improving the scene recognition capability of the unmanned driving system solely by upgrading sensors and algorithms is therefore costly and inefficient.
Disclosure of Invention
The present disclosure provides an unmanned operation method based on manned driving, an unmanned operation device, an electronic device, and a storage medium.
According to a first aspect of the present disclosure, there is provided an unmanned operation method based on manned driving, the method comprising:
forming a consist of the unmanned vehicle and the manned vehicle;
the unmanned vehicle acquiring manned driving data provided by the manned vehicle within the consist; wherein the manned driving data comprises work information manually annotated by a user while driving the manned vehicle to work in a target area, the work information being associated with the real-time positioning information of the manned vehicle at the time of annotation;
determining, based on the work information, whether the unmanned vehicle has traveled to a work area;
and if so, executing the corresponding unmanned work in the work area.
According to a second aspect of the present disclosure, there is provided an unmanned operation device based on manned driving, the device including:
a grouping module for forming a consist of the unmanned vehicle and the manned vehicle;
a data acquisition module for the unmanned vehicle to acquire manned driving data provided by the manned vehicle within the consist; wherein the manned driving data comprises work information manually annotated by a user while driving the manned vehicle to work in a target area, the work information being associated with the real-time positioning information of the manned vehicle at the time of annotation;
a judging module for judging, based on the work information, whether the unmanned vehicle has traveled to a work area;
and an operation module for executing the corresponding unmanned work in the work area when the unmanned vehicle has traveled to the work area.
According to a third aspect of the present disclosure, there is provided an electronic device. The electronic device includes: a memory having a computer program stored thereon, and a processor that implements the method described above when executing the program.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-mentioned method of the present disclosure.
According to the unmanned operation method and device based on manned driving, the electronic device, and the storage medium provided above, a consist of the unmanned vehicle and the manned vehicle is formed, and the unmanned vehicle acquires the manned driving data provided by the manned vehicle within the consist, the manned driving data comprising work information manually annotated by a user while driving the manned vehicle to work in a target area and associated with the real-time positioning information of the manned vehicle at the time of annotation; when the unmanned vehicle is determined, based on the work information, to have traveled to the work area, the corresponding unmanned work is executed in the work area. The unmanned vehicle can thus work in the target area using the manned driving data collected by the manned vehicle, so that human judgment keeps the vehicles making sound decisions under different environmental conditions and changing scenes, while the unmanned vehicle uses its own strengths to adapt to scenes that change dynamically in real time. The capabilities of the two complement each other, improving adaptability to the environment and the scene.
Drawings
Further details, features and advantages of the disclosure are disclosed in the following description of exemplary embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a scenario provided by an exemplary embodiment of the present disclosure;
FIG. 2 is a flow chart of an unmanned operation method based on manned driving provided by an exemplary embodiment of the present disclosure;
FIG. 3 is a functional block diagram of an unmanned operation device based on manned driving provided by an exemplary embodiment of the present disclosure;
fig. 4 is a block diagram of an electronic device according to an exemplary embodiment of the present disclosure;
fig. 5 is a block diagram of a computer system according to an exemplary embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description. It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
In unmanned fields such as mining, the terrain of the loading area changes as excavation advances and cannot be exhaustively enumerated; moreover, in scenes such as coal mining, a lidar may receive almost no beam reflection from coal surfaces. Improving the scene recognition capability of the unmanned system solely by upgrading sensor and algorithm processing capability is inefficient.
Therefore, to avoid relying solely on improved sensors and algorithms for scene recognition, a manned vehicle and unmanned vehicles can form a consist. The manned vehicle performs normal driving work under a driver, the scene is recognized automatically, and the recognition result is shared with the unmanned vehicles via a cloud platform or by direct transmission. In an embodiment, the manned vehicle shares its manned driving data with the cloud platform or the unmanned vehicle; the recipient recognizes the scenes in those data to obtain the travel track and the work areas in the operating environment, enabling the unmanned vehicle to work.
This combines human experience with the unmanned system, so that unmanned vehicles can work in a target area based on manned driving data. The manned vehicle in the embodiments of the present disclosure may be a vehicle driven by an on-board driver, a vehicle driven by manual remote control, and the like.
In an embodiment provided by the present disclosure, a manned vehicle and an unmanned vehicle can form a consist. The manned vehicle is driven by a driver to perform normal work; after one complete work cycle is finished, the travel track from the work process and the recognized work scene are obtained in turn, where the work scene may specifically be a work area.
In the embodiment provided by the present disclosure, a manned vehicle and an unmanned vehicle are grouped so that they share the same work environment and perform the same work content. The manned vehicle may be identical to the unmanned vehicle, with the same sensor configuration and unmanned driving capability; alternatively, it may lack unmanned capability and be somewhat degraded relative to the unmanned vehicle in sensor types and capabilities, but it must provide the basic data acquisition and processing capabilities the business requires. While the manned vehicle works, the basic data it collects can be recorded and stored in real time. These basic data serve as the raw manned driving data; through processing and scene recognition they yield a travel track and work areas, and the data containing the travel track and work areas are used as the manned driving data.
In an embodiment, the collected raw manned driving data are processed. For example, the manned vehicle may process them itself as they are acquired, turning them into manned driving data that contain a travel track and work areas; or the manned vehicle may send them to a cloud platform, which performs the processing; or the manned vehicle may send them to the unmanned vehicle by V2V (vehicle-to-vehicle) communication, and the unmanned vehicle performs the processing; or the manned vehicle may send already-processed manned driving data to the cloud platform or the unmanned vehicle. As for when the data are sent, the manned driving data may be sent after the manned vehicle completes one full work cycle, or only after it completes several. When the raw data are processed, they may be screened, optimized, or integrated.
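As a concrete illustration of the screening, optimizing, and integrating just mentioned, the following is a minimal sketch, assuming a raw manned-data record is a list of (timestamp, x, y, accuracy) positioning samples; the accuracy threshold, the moving-average smoothing, and the merge-by-timestamp integration are illustrative choices, not details fixed by the disclosure.

```python
from statistics import mean
from typing import List, Tuple

# Assumed raw record layout: (timestamp_s, x_m, y_m, position_accuracy_m).
Sample = Tuple[float, float, float, float]

def screen(samples: List[Sample], max_acc_m: float = 0.5) -> List[Sample]:
    """Screening: drop positioning samples whose reported accuracy is poor."""
    return [s for s in samples if s[3] <= max_acc_m]

def optimize(samples: List[Sample], window: int = 5) -> List[Sample]:
    """Optimizing: smooth the trace with a simple trailing moving average."""
    out: List[Sample] = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append((samples[i][0],
                    mean(c[1] for c in chunk),
                    mean(c[2] for c in chunk),
                    samples[i][3]))
    return out

def integrate(runs: List[List[Sample]]) -> List[Sample]:
    """Integrating: merge several complete work cycles into one ordered trace."""
    return sorted((s for run in runs for s in run), key=lambda s: s[0])
```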
In a scenario where the manned vehicle transmits the collected data to the unmanned vehicle or the cloud platform in real time, one manned vehicle can form a consist with several unmanned vehicles; the manned vehicle works normally in the target area and collects data while working. The unmanned vehicles within the consist then work in the target area based on the manned driving data derived from that collection. For example, an unmanned vehicle acquires the manned driving data provided by the manned vehicle within the consist, where the data may include work information manually annotated by the user while driving the manned vehicle to work in the target area, associated with the real-time positioning information of the manned vehicle at the time of annotation. The unmanned vehicle can thus judge, based on the work information, whether it has traveled to a work area; if so, it works according to the manually annotated work information, that is, it executes the corresponding unmanned work. The work information may include the type of travel track and the content, boundary, or range of a work area, where the content of a work area includes, for example, a driving area, a queuing area, a waiting position, or a loading position.
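One way the manned driving data described above might be structured is sketched below; the field and enum names (TrajectoryType, AreaKind, and so on) are assumptions for illustration, since the disclosure only requires that manually annotated work information be associated with the real-time position of the manned vehicle at the time of annotation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple

Point = Tuple[float, float]

class TrajectoryType(Enum):
    EMPTY = "empty"   # unloaded travel
    HEAVY = "heavy"   # loaded travel

class AreaKind(Enum):
    DRIVING = "driving_area"
    QUEUING = "queuing_area"
    TO_BE_LOADED = "to_be_loaded_area"
    LOADING = "loading_area"

@dataclass
class WorkAnnotation:
    """Work information tagged by the driver, tied to the tagging position."""
    area_kind: AreaKind
    position: Point                                       # real-time position when tagged
    boundary: List[Point] = field(default_factory=list)   # optional area boundary

@dataclass
class MannedDrivingData:
    trajectory_type: TrajectoryType
    trajectory: List[Point]                               # positioning trace of the run
    annotations: List[WorkAnnotation]
```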
In an exemplary embodiment, the target area may be an open-pit (strip) mine, which can be subdivided into a plurality of work areas, such as driving areas, queuing areas, loading areas, and waiting or loading positions. When the unmanned vehicle travels to the different work areas within the target area, it can execute the corresponding work: for example, it queues when it travels to the queuing area, and it performs loading work when it travels to the loading area.
In addition, the real-time positioning information can be acquired by a positioning sensor on the manned vehicle. In an embodiment, the waiting position may be the to-be-loaded stop point in fig. 1, and the loading position may be the loading stop point in fig. 1.
In an embodiment, the types of travel track may include a heavy-load (loaded) track and an empty-load (unloaded) track. As shown in fig. 1, take the loading process of an unmanned vehicle in a mine scene as an example. After entering the loading area, the manned vehicle drives to the loading stop point; since its loading state is empty during this stretch, the track from the loading-area entrance to the loading stop point is an empty-load track. After the manned vehicle finishes loading at the loading stop point, the track between the loading stop point and the loading-area exit is a heavy-load track.
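The empty/heavy split can be read directly off the trace, as in the following sketch; the 1-meter radius used to decide that the vehicle has reached the loading stop point is an assumed tolerance.

```python
from math import dist
from typing import List, Tuple

Point = Tuple[float, float]

def split_empty_heavy(trace: List[Point], loading_stop: Point,
                      radius_m: float = 1.0) -> Tuple[List[Point], List[Point]]:
    """Split a loading-area trace at the loading stop point.

    Returns (empty-load leg: entrance -> stop, heavy-load leg: stop -> exit).
    """
    for i, p in enumerate(trace):
        if dist(p, loading_stop) <= radius_m:
            return trace[: i + 1], trace[i:]
    return trace, []   # the stop point was never reached
```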
In an exemplary embodiment provided by the present disclosure, the loading process of an unmanned vehicle in a mine scene is again taken as an example. Generally, when an unmanned vehicle arrives at the loading-area entrance, it must determine a waiting (to-be-loaded) position and plan a travel track to that position; it then drives to the waiting position along the planned track, drives to the loading position when the excavator requests it, and drives away from the loading position after loading is complete. Many links in this process demand that the unmanned vehicle understand the environment and the scene, for example determining the waiting position, planning the track to the waiting position, determining the loading position, and planning the track to the loading position, so both safety and efficiency problems arise.
Safety problems: determining the waiting position, planning the travel track, and driving to the waiting and loading positions all depend on base maps and perception capability; in situations where lidar recognition is limited or lost entirely, as in coal-mining scenes, basic safety cannot be guaranteed.
Efficiency problems: for determining the waiting position, planning the travel track, and the like, whether the position and track are chosen reasonably directly affects vehicle operating efficiency, and that choice depends strongly on accurate recognition of the environment and the scene; yet as work advances, the environment and scene of the loading area change continually, and the recognition algorithm can fail.
Regarding these safety and efficiency problems: improving sensor capability and environment and scene recognition greatly increases cost, while strictly constraining the operating boundary of the whole system by limiting where the unmanned vehicle may be used reduces its business adaptability.
Therefore, to solve the above safety and efficiency problems, and again taking the loading process of unmanned vehicles in a mine scene as an example, several unmanned vehicles and one manned vehicle can be combined into a work consist. During work, the manned vehicle queues and waits before the to-be-loaded position after entering the work-scene entrance, and automatically collects manned driving data while driving, including positioning data as well as manual annotations and the like. The manned vehicle drives to the loading position on the excavator's instruction and stops there, collecting its driving data and stop position along the way. After the excavator finishes loading, the manned vehicle drives away from the loading position, automatically recording the driving data of its departure. The manned vehicle may, for example, process the collected raw data into manned driving data and share them with the other unmanned vehicles, which then drive and stop based on the shared travel track and stop position. Of course, the manned vehicle may also send the collected raw data to the cloud platform or the unmanned vehicle for processing, as described above and not repeated here.
In an embodiment, the manned vehicle and the unmanned vehicle work in a target area; for example, both are transport vehicles, and the target area is a loading area comprising a loading-area entrance, an empty-vehicle driving area, a queuing area, a to-be-loaded (waiting) area, a loading area, a heavy-vehicle driving area, and a loading-area exit. The manned vehicle enters through the loading-area entrance as an empty vehicle and drives through the empty-vehicle driving area; on reaching the queuing area it queues with other vehicles to wait for loading; after queuing it enters the to-be-loaded area; it stops at the specific loading position on the excavator's instruction to be loaded; once loading is finished it is a heavy vehicle, and it drives away through the heavy-vehicle driving area and out of the loading-area exit. Manned driving data are generated throughout the manned vehicle's work in the loading area, and the unmanned vehicle can complete its work in the loading area according to those data, by a process consistent with the manned vehicle's and not repeated here. Of course, when the unmanned vehicle works based on the manned driving data, those data may be processed data, in the manner described above.
Specifically, fig. 1 is a schematic view of a scene provided in an embodiment of the present disclosure. In an embodiment, the unmanned vehicles and the manned vehicle may be transport vehicles, for example vehicles that transport ore in a mine setting.
In fig. 1, the manned vehicle enters the loading-area entrance, passes through the entrance area into the queuing area, then enters the to-be-loaded area and stops at the to-be-loaded stop point, and on receiving the excavator's instruction stops at the loading stop point in the loading area. After the excavator completes the loading work, the manned vehicle leaves the loading stop point. From entering the loading-area entrance to leaving the loading area, the manned vehicle generates manned driving data, which can include its positioning data as well as manual annotations of each stop point and area. From the positioning data, the vehicle's travel tracks can be generated, including the track from the loading-area entrance to the loading stop point and the track away from the loading stop point; from the manned driving data, the various work areas can be identified, such as the entrance area, queuing area, to-be-loaded area, and loading area. The travel tracks and work areas so obtained let the unmanned vehicle carry out its work without a driver.
In the embodiment, the work areas can be identified as follows: a loading area can be defined between the to-be-loaded stop point and the loading stop point; the to-be-loaded area can cover roughly the 30 meters before the position where the vehicle completes its reversing maneuver ahead of the to-be-loaded stop point; and a queuing area can be defined between the loading-area entrance and the to-be-loaded area. The stop points themselves can be determined by manual calibration or from preset positions on the map.
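Under those rules, the areas can be derived from the stop points alone. The sketch below works on a one-dimensional arc-length parameterization of the trace (distance driven from the loading-area entrance), which is an assumed simplification; real boundaries would be two-dimensional regions, and the 30-meter extent of the to-be-loaded area follows the figure given above.

```python
from typing import Dict, Tuple

# Each area is an (start, end) interval of arc length along the trace, meters.
Interval = Tuple[float, float]

def derive_areas(entrance_s: float, waiting_stop_s: float,
                 loading_stop_s: float, waiting_len_m: float = 30.0
                 ) -> Dict[str, Interval]:
    to_be_loaded_start = max(entrance_s, waiting_stop_s - waiting_len_m)
    return {
        "queuing_area": (entrance_s, to_be_loaded_start),
        "to_be_loaded_area": (to_be_loaded_start, waiting_stop_s),
        "loading_area": (waiting_stop_s, loading_stop_s),
    }
```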
The unmanned vehicle can therefore execute its work according to the manned vehicle's travel track and stop position; in an embodiment, it can also fine-tune the track and stop position using its own perception information to adapt to real-time environmental changes. In addition, the manned vehicle can be driven by remote control, and according to business needs one manned vehicle can form a consist with several unmanned vehicles, largely reducing the cost increase brought by manual intervention.
In the embodiments of the present disclosure, because a mine scene changes slowly and continuously, a scene recognition result obtained from one complete manned work cycle remains reasonably accurate over the time span of several subsequent work cycles. One manned vehicle can therefore support the automatic work of a whole consist of unmanned vehicles, so on balance the method avoids the cost increase of adding labor.
Based on the above embodiments, the disclosed embodiments first provide an unmanned operation method based on manned driving; as shown in fig. 2, the method may include the following steps:
in step S210, a consist of unmanned vehicles and manned vehicles is formed.
In the embodiments of the present disclosure, manned vehicles whose work content is the same as or similar to the unmanned vehicle's may be grouped with it, or manned vehicles of the same type as the unmanned vehicle may be grouped with it, so that when working in the target area the unmanned vehicle can work directly on the basis of the manned driving data those vehicles generate.
In step S220, the unmanned vehicle acquires the manned driving data provided by the manned vehicle within the consist.
In an embodiment, the unmanned vehicle may obtain the manned driving data through a cloud platform, near-field communication technology, or short-range communication technology. When the unmanned vehicle obtains the data through the cloud platform, the platform can process them, for example by optimizing, screening, or integrating, and send the processed data to the unmanned vehicle, so that the vehicle can work better in the target area according to those data.
The manned driving data comprise work information manually annotated by the user while driving the manned vehicle to work in the target area, associated with the real-time positioning information of the manned vehicle at the time of annotation. The work information in the embodiment includes the type of travel track and the content, boundary, or range of a work area. The unmanned vehicle may be used for transportation, and the content of a work area may include at least one of a driving area, a queuing area, a waiting area, a loading area, a waiting position, and a loading position. The real-time positioning information may be obtained by a positioning sensor on the manned vehicle.
In embodiments provided by the present disclosure, the manned vehicle may be identical to the unmanned vehicle, with the same sensor configuration and unmanned driving capability; it may also lack unmanned capability and be somewhat degraded relative to the unmanned vehicle in sensor types and capabilities, but it must have the basic data acquisition and processing capabilities the business requires. While the manned vehicle works, the basic data it collects can be recorded and stored in real time. These basic data include high-precision positioning data and the like; together with the user's manual annotations, they constitute the manned driving data. Manual annotation may, for example, be done via a button on a display device.
In the embodiment, the travel track can be obtained from the positioning data in the manned driving data, and the work areas are obtained through scene recognition. As seen in fig. 1 and the corresponding embodiments above, the travel track may comprise the track from the loading-area entrance to the loading stop point and the track away from the loading stop point. The tracks can be further divided by stop point: for example, with the loading-area entrance, the to-be-loaded stop point, and the loading stop point as stop points, the tracks may include the track from the entrance to the to-be-loaded stop point, the track from the to-be-loaded stop point to the loading stop point, the track away from the loading stop point, and so on. The work scenes recognized in the target area may include a driving area, a queuing area, a waiting position, or a loading position. The areas and positions in embodiments may be obtained from the user's manual annotations made while driving the manned vehicle to work in the target area.
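A minimal sketch of that division by stop point follows, assuming a stop point counts as reached when the trace passes within 1 meter of it (an assumed tolerance); the segment names mirror the description.

```python
from math import dist
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def first_hit(trace: List[Point], stop: Point, start: int = 0,
              radius_m: float = 1.0) -> int:
    """Index of the first trace sample within radius_m of the stop point."""
    for i in range(start, len(trace)):
        if dist(trace[i], stop) <= radius_m:
            return i
    raise ValueError("stop point not reached by the trace")

def segment_tracks(trace: List[Point], entrance: Point, waiting_stop: Point,
                   loading_stop: Point) -> Dict[str, List[Point]]:
    i0 = first_hit(trace, entrance)
    i1 = first_hit(trace, waiting_stop, i0)
    i2 = first_hit(trace, loading_stop, i1)
    return {
        "entrance_to_waiting_stop": trace[i0: i1 + 1],
        "waiting_stop_to_loading_stop": trace[i1: i2 + 1],
        "away_from_loading_stop": trace[i2:],
    }
```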
In an embodiment, the area between the loading-area entrance and the to-be-loaded stop point can serve as the queuing area; the area between the to-be-loaded stop point and the loading stop point as the to-be-loaded area; and the area around the loading stop point as the loading area.
In the embodiment provided by the present disclosure, the work areas in the target area may also be determined from the manually annotated information: for example, the area annotation information in the manned driving data is acquired, and the work areas in the target area are determined from it.
In the embodiments of the present disclosure, the unmanned vehicle can complete work with the same content as the manned vehicle's in the target area based on the obtained travel track and work areas, without planning its own work track or re-recognizing the scene.
In step S230, it is determined, based on the work information, whether the unmanned vehicle has traveled to the work area.
In step S240, when the unmanned vehicle has traveled to the work area, the corresponding unmanned work is performed there.
The manned driving data comprise the work information manually annotated by the user while driving the manned vehicle to work in the target area, associated with the real-time positioning information of the manned vehicle at the time of annotation. In the embodiment, the target area can be further divided, according to the content of the work areas, into areas such as a driving area, a queuing area, a waiting position, or a loading position, so the vehicle must execute the corresponding work content when entering different work areas.
It is therefore necessary to judge whether the unmanned vehicle has traveled to a work area; once it has, it can execute the corresponding unmanned work there based on the manned driving data. For example, the unmanned vehicle queues with other vehicles when it travels to the queuing area, waits for loading when it travels to the to-be-loaded area, and performs loading work when it travels to the loading area.
In an embodiment, since the unmanned vehicle and the manned vehicle form a consist, when the unmanned vehicle travels to a given work area its work content there can be the same as the manned vehicle's, and it can drive along the travel track in the manned driving data and execute the unmanned work in the work area. For example, when the unmanned vehicle enters the queuing area and detects other vehicles there, it queues; when it enters the to-be-loaded area, it waits for the excavator's instruction; on receiving the instruction, it drives to the corresponding loading stop point and stops, and after the loading work is finished it drives away from the stop point.
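The judgment of steps S230 and S240 reduces to a geometric membership test plus a dispatch on the matched area, as in the sketch below; the ray-casting point-in-polygon test and the area names are illustrative, since the disclosure does not fix a particular geometric method.

```python
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float]

def in_polygon(p: Point, poly: List[Point]) -> bool:
    """Standard ray-casting point-in-polygon test."""
    x, y = p
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def current_work_area(pos: Point,
                      areas: Dict[str, List[Point]]) -> Optional[str]:
    """Name of the annotated work area containing the vehicle, if any."""
    for name, boundary in areas.items():
        if in_polygon(pos, boundary):
            return name
    return None

# Dispatch example: queue on entering the queuing area, load in the loading area.
# if current_work_area(pos, areas) == "queuing_area": start_queuing()
```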
In the embodiment, the manned driving data are obtained from the data the manned vehicle collects while working in the target area together with the manually annotated information. The data on which the unmanned vehicle works may be those obtained after the manned vehicle completes one full work cycle in the target area; one or more unmanned vehicles can then work in the work areas of the target area according to those data. While working according to the data, they can also verify the data's validity, so the manned driving data can be refined or dynamically adjusted, and other vehicles can complete their work better according to the refined or adjusted data.
According to the unmanned operation method based on manned driving, a consist of the unmanned vehicle and the manned vehicle is formed, and the unmanned vehicle acquires the manned driving data provided by the manned vehicle within the consist, the data comprising work information manually annotated by the user while driving the manned vehicle to work in the target area and associated with the vehicle's real-time positioning information at the time of annotation; when the unmanned vehicle is judged, based on the work information, to have traveled to the work area, the corresponding unmanned work is executed there. The unmanned vehicle can thus work in the target area using the manned driving data collected by the manned vehicle, so that human judgment keeps the vehicles making sound decisions under different environmental conditions and in scenes that change slowly over time, while the unmanned vehicle uses its own strengths to adapt to real-time dynamic changes. The capabilities of the two complement each other, improving adaptability to the environment and the scene.
Based on the foregoing embodiment, in another embodiment provided by the present disclosure, the foregoing step S240 may further include the following steps:
in step S241, information to be worked of the unmanned vehicle is acquired.
In step S242, a target travel track and a target work area are acquired. Wherein the target driving track and the target operation area are respectively determined from the driving track and the operation area based on the information to be operated.
In step S243, the vehicle travels along the target travel trajectory and performs work when entering the target area.
In the embodiment provided by the present disclosure, the manned driving data may be data obtained while one manned vehicle works in the target area, or data obtained while several manned vehicles work there.
For manned driving data obtained while one manned vehicle works in the target area: if the manned vehicle and the unmanned vehicle have the same work content and work in the same target area, the unmanned vehicle can work directly according to the travel track and work areas derived from the data. If the manned vehicle's work in the target area covers more work content while the unmanned vehicle's covers only one or a few of those contents, then since the travel tracks and work areas all come from the manned driving data, the target travel track and target work area matching the unmanned vehicle's to-be-worked information must be determined from them respectively, so that the unmanned vehicle can travel along the target track and work when entering the target work area.
The same applies to manned driving data obtained while several manned vehicles work in the target area: the data may cover more work content than the unmanned vehicle needs, so the target travel track and target work area matching the unmanned vehicle's to-be-worked information must likewise be determined from the travel tracks and work areas respectively.
Determining the target travel track and target work area ensures that, for its particular work information, the unmanned vehicle travels along the appropriate track and works when it enters the appropriate area.
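Steps S241 to S243 can be sketched as a simple selection over the records derived from the manned driving data; the record layout and the equality match on work content are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class WorkRecord:
    content: str          # work content, e.g. "loading" (assumed label)
    track: List[Point]    # travel track recorded by the manned vehicle
    area: List[Point]     # boundary of the associated work area

def select_target(records: List[WorkRecord],
                  pending_content: str) -> Optional[WorkRecord]:
    """Step S242: pick the record matching the vehicle's to-be-worked info."""
    for rec in records:
        if rec.content == pending_content:
            return rec
    return None

# Step S243: the unmanned vehicle then travels along target.track and
# performs the work on entering target.area.
```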
It should be noted that the method provided by the embodiments of the present disclosure may be applied to an unmanned vehicle, or to a device such as a console for controlling the unmanned vehicle; the embodiments of the present disclosure are not limited in this respect.
Where each function module is divided according to its function, the present disclosure provides an unmanned operation device based on manned driving, which may be a server or a chip applied to a server. Fig. 3 schematically shows a functional block diagram of an unmanned operation device based on manned driving according to an exemplary embodiment of the present disclosure. As shown in fig. 3, the device includes:
a grouping module 10 for forming a consist of the unmanned vehicle and the manned vehicle;
a data acquisition module 20 for the unmanned vehicle to acquire manned driving data provided by the manned vehicle within the consist; wherein the manned driving data comprises work information manually annotated by a user while driving the manned vehicle to work in a target area, the work information being associated with the real-time positioning information of the manned vehicle at the time of annotation;
a determination module 30 for judging, based on the work information, whether the unmanned vehicle has traveled to a work area;
and an operation module 40 for executing the corresponding unmanned work in the work area when the unmanned vehicle has traveled to the work area.
In yet another embodiment provided by the present disclosure, the work information includes the type of travel track and the content, boundary, or range of a work area.
In yet another embodiment provided by the present disclosure, the unmanned vehicle is used for transportation, and the content of the work area includes a driving area, a queuing area, a waiting area, a loading area, a waiting position, or a loading position.
In yet another embodiment provided by the present disclosure, the manual annotation is performed via a button on the display device.
In yet another embodiment provided by the present disclosure, the real-time positioning information is obtained by a positioning sensor on the manned vehicle.
In yet another embodiment provided by the present disclosure, the unmanned vehicle obtains the manned driving data through a cloud platform, near-field communication technology, or short-range communication technology.
In yet another embodiment provided by the present disclosure, the data acquired by the unmanned vehicle from the cloud platform is processed by the cloud platform.
In yet another embodiment provided by the present disclosure, the processing of the cloud platform includes optimizing, screening, or integrating the manned driving data.
Since the device embodiment corresponds to the method embodiment, reference may be specifically made to the description of the method embodiment, and details are not described here.
According to the unmanned operation device based on manned driving, a consist of the unmanned vehicle and the manned vehicle is formed, and the unmanned vehicle acquires the manned driving data provided by the manned vehicle within the consist, the data comprising work information manually annotated by the user while driving the manned vehicle to work in the target area and associated with the vehicle's real-time positioning information at the time of annotation; when the unmanned vehicle is judged, based on the work information, to have traveled to the work area, the corresponding unmanned work is executed there. The unmanned vehicle can thus work in the target area using the manned driving data collected by the manned vehicle, so that human judgment keeps the vehicles making sound decisions under different environmental conditions and changing scenes, while the unmanned vehicle uses its own strengths to adapt to real-time dynamic changes. The capabilities of the two complement each other, improving adaptability to the environment and the scene.
An embodiment of the present disclosure further provides an electronic device, including: at least one processor; and a memory for storing instructions executable by the at least one processor; wherein the at least one processor is configured to execute the instructions to implement the above methods disclosed by the embodiments of the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure. As shown in fig. 4, the electronic device 1800 includes at least one processor 1801 and a memory 1802 coupled to the processor 1801, wherein the processor 1801 may perform corresponding steps of the above methods disclosed in the embodiments of the present disclosure.
The processor 1801 may also be referred to as a Central Processing Unit (CPU), which may be an integrated circuit chip having signal processing capability. The steps of the above methods disclosed in the embodiments of the present disclosure may be implemented by integrated logic circuits of hardware in the processor 1801 or by instructions in the form of software. The processor 1801 may be a general-purpose processor, a Digital Signal Processor (DSP), an ASIC, a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present disclosure may be implemented directly by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. Software modules may reside in memory 1802, such as random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers, or another storage medium known in the art. The processor 1801 reads the information in the memory 1802 and, in conjunction with its hardware, performs the steps of the above methods.
In addition, when various operations/processes according to the present disclosure are implemented by software and/or firmware, a program constituting the software may be installed from a storage medium or a network onto a computer system having a dedicated hardware structure, for example the computer system 1900 shown in fig. 5, which can execute various functions, including those described above, when the various programs are installed. Fig. 5 is a block diagram of a computer system according to an exemplary embodiment of the present disclosure.
Computer system 1900 is intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the computer system 1900 includes a computing unit 1901, and the computing unit 1901 can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1902 or a computer program loaded from a storage unit 1908 into a Random Access Memory (RAM) 1903. In the RAM 1903, various programs and data required for the operation of the computer system 1900 can also be stored. The calculation unit 1901, ROM 1902, and RAM 1903 are connected to each other via a bus 1904. An input/output (I/O) interface 1905 is also connected to bus 1904.
A number of components in computer system 1900 are connected to I/O interface 1905, including: an input unit 1906, an output unit 1907, a storage unit 1908, and a communication unit 1909. The input unit 1906 may be any type of device capable of inputting information to the computer system 1900, and the input unit 1906 may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device. Output unit 1907 can be any type of device capable of presenting information and can include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. Storage unit 1908 can include, but is not limited to, a magnetic disk, an optical disk. The communication unit 1909 allows the computer system 1900 to exchange information/data with other devices via a network, such as the Internet, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver, and/or a chipset, such as a Bluetooth (TM) device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
The computing unit 1901 may be a variety of general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 1901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine-learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 1901 performs the respective methods and processes described above. For example, in some embodiments, the above methods disclosed by embodiments of the present disclosure may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1908. In some embodiments, part or all of the computer program can be loaded and/or installed onto the computer system 1900 via the ROM 1902 and/or the communication unit 1909. In some embodiments, the computing unit 1901 may be configured by any other suitable means (e.g., by means of firmware) to perform the above methods disclosed by the embodiments of the present disclosure.
The disclosed embodiments also provide a computer-readable storage medium, wherein when the instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the above method disclosed by the disclosed embodiments.
A computer readable storage medium in embodiments of the disclosure may be a tangible medium that may contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specifically, the computer-readable storage medium may include one or more wire-based electrical connections, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The embodiments of the present disclosure also provide a computer program product, which includes a computer program, wherein the computer program, when executed by a processor, implements the above method disclosed by the embodiments of the present disclosure.
In embodiments of the present disclosure, computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules, components or units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware. Wherein the designation of a module, component or unit does not in some way constitute a limitation on the module, component or unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely an explanation of some embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example solutions in which the above features are replaced with (but not limited to) features with similar functions disclosed in the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (11)

1. An unmanned operation method based on manned driving, characterized by comprising:
forming a consist of the unmanned vehicle and the manned vehicle;
the unmanned vehicle acquiring manned driving data provided by the manned vehicle within the consist; wherein the manned driving data comprises work information manually annotated by a user while driving the manned vehicle to work in a target area, the work information being associated with the real-time positioning information of the manned vehicle at the time of annotation;
determining, based on the work information, whether the unmanned vehicle has traveled to a work area;
and if so, executing the corresponding unmanned work in the work area.
2. The method of claim 1, wherein the work information includes the type of travel track and the content, boundary, or range of a work area.
3. The method of claim 2, wherein the unmanned vehicle is used for transportation, and the content of the work area comprises a driving area, a queuing area, a waiting area, a loading area, a waiting position, or a loading position.
4. The method of claim 3, wherein the manual annotation is performed via a button on a display device.
5. The method of claim 1, wherein the real-time positioning information is obtained by a positioning sensor on the manned vehicle.
6. The method of any one of claims 1 to 5, wherein the unmanned vehicle obtains the manned driving data via a cloud platform, near-field communication technology, or short-range communication technology.
7. The method of claim 6, wherein the data acquired by the unmanned vehicle from the cloud platform is processed by the cloud platform.
8. The method of claim 6, wherein the processing of the cloud platform comprises optimizing, screening, or integrating the manned driving data.
9. An unmanned operation device based on manned driving, characterized in that the device comprises:
a grouping module for forming a consist of the unmanned vehicle and the manned vehicle;
a data acquisition module for the unmanned vehicle to acquire manned driving data provided by the manned vehicle within the consist; wherein the manned driving data comprises work information manually annotated by a user while driving the manned vehicle to work in a target area, the work information being associated with the real-time positioning information of the manned vehicle at the time of annotation;
a judging module for judging, based on the work information, whether the unmanned vehicle has traveled to a work area;
and an operation module for executing the corresponding unmanned work in the work area when the unmanned vehicle has traveled to the work area.
10. An electronic device, comprising:
at least one processor;
a memory for storing instructions executable by the at least one processor;
wherein the at least one processor is configured to execute the instructions to implement the method of any one of claims 1 to 8.
11. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-8.
CN202211670197.0A 2022-12-26 2022-12-26 Unmanned operation method and device based on manned operation, electronic equipment and storage medium Active CN115649186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211670197.0A CN115649186B (en) 2022-12-26 2022-12-26 Unmanned operation method and device based on manned operation, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211670197.0A CN115649186B (en) 2022-12-26 2022-12-26 Unmanned operation method and device based on manned operation, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115649186A 2023-01-31
CN115649186B 2023-11-07

Family

ID=85023259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211670197.0A Active CN115649186B (en) 2022-12-26 2022-12-26 Unmanned operation method and device based on manned operation, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115649186B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015173073A1 (en) * 2014-05-15 2015-11-19 Cnh Industrial Belgium Nv Harvesting method using unmanned agricultural work vehicles
US20170055455A1 (en) * 2014-05-15 2017-03-02 Cnh Industrial America Llc Harvesting method using unmanned agricultural work vehicles
CA2941194A1 (en) * 2014-09-30 2016-04-07 Hitachi Construction Machinery Co., Ltd. Driving assistance system, vehicle, driving assistance terminal device, and driving assistance program
US20170068249A1 (en) * 2014-09-30 2017-03-09 Hitachi Construction Machinery Co., Ltd. Driving assistance system, vehicle, driving assistance terminal device, and driving assistance program
WO2016129671A1 (en) * 2015-02-13 2016-08-18 ヤンマー株式会社 Control system for autonomously traveling work vehicle
JP2019114138A (en) * 2017-12-25 2019-07-11 井関農機株式会社 Farm work supporting system
JP2020162617A (en) * 2018-02-28 2020-10-08 株式会社クボタ Work vehicle
CN110519703A (en) * 2019-08-28 2019-11-29 北京易控智驾科技有限公司 A kind of mine car Unmanned Systems
CN112455440A (en) * 2020-11-30 2021-03-09 北京易控智驾科技有限公司 Collaborative avoidance method, device, equipment and medium for automatically driving vehicle marshalling
CN112613672A (en) * 2020-12-28 2021-04-06 北京易控智驾科技有限公司 Vehicle dispatching method and device for strip mine, storage medium and electronic equipment
WO2022257767A1 (en) * 2021-06-11 2022-12-15 华能伊敏煤电有限责任公司 Method for automatically controlling path of mining area transport truck

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116151596A (en) * 2023-04-20 2023-05-23 北京路凯智行科技有限公司 Mixed grouping scheduling method for open-pit mining area

Also Published As

Publication number Publication date
CN115649186B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
US11874671B2 (en) Performing tasks using autonomous machines
US20200331476A1 (en) Automatic lane change with minimum gap distance
JP7281489B2 (en) Autonomous operation method, device, program and storage medium for self-driving logistics vehicle
US20200209874A1 (en) Combined virtual and real environment for autonomous vehicle planning and control testing
US10139832B2 (en) Computer-assisted or autonomous driving with region-of-interest determination for traffic light analysis
JP2018503169A (en) Autonomous vehicle that detects and responds to concession scenarios
JP7060398B2 (en) Server device
US20210035443A1 (en) Navigation analysis for a multi-lane roadway
CN115649186B (en) Unmanned operation method and device based on manned operation, electronic equipment and storage medium
JP2019192234A (en) Tracking objects with multiple cues
CN113535743A (en) Real-time updating method and device for unmanned map, electronic equipment and storage medium
CN115686028B (en) Unmanned operation method and device based on manned operation, electronic equipment and storage medium
AU2021334408A1 (en) Method and apparatus for coordinating multiple cooperative vehicle trajectories on shared road networks
CN109945880B (en) Path planning method, related equipment and readable storage medium
CN111861008A (en) Unmanned vehicle and path planning method, device and readable storage medium thereof
EP4169808A1 (en) Automated valet parking system, control method for automated valet parking system, and non-transitory storage medium
US20230060383A1 (en) System and method of off-board-centric autonomous driving computation
CN115675493B (en) Unmanned method and device using manual driving track layer information
CN114485670A (en) Path planning method and device for mobile unit, electronic equipment and medium
CN105144663A (en) System and method for data collection and analysis using a multi-level network
CN113793518A (en) Vehicle passing processing method and device, electronic equipment and storage medium
CN113815644A (en) System and method for reducing uncertainty in estimating autonomous vehicle dynamics
CN115657692A (en) Unmanned operation method and device based on manned driving, electronic equipment and storage medium
CN115686029B (en) Unmanned operation method and device based on manned operation, electronic equipment and storage medium
US20230366683A1 (en) Information processing device and information processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant