CN112477886A - Method and device for controlling unmanned vehicle, electronic device and storage medium - Google Patents


Info

Publication number
CN112477886A
Authority
CN
China
Prior art keywords
target object
candidate
candidate target
unmanned vehicle
vehicle
Prior art date
Legal status
Granted
Application number
CN202011410339.0A
Other languages
Chinese (zh)
Other versions
CN112477886B (en)
Inventor
王小刚
Current Assignee
Nanjing Leading Technology Co Ltd
Original Assignee
Nanjing Leading Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Leading Technology Co Ltd filed Critical Nanjing Leading Technology Co Ltd
Priority to CN202011410339.0A priority Critical patent/CN112477886B/en
Publication of CN112477886A publication Critical patent/CN112477886A/en
Application granted granted Critical
Publication of CN112477886B publication Critical patent/CN112477886B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 60/00: Drive control systems specially adapted for autonomous road vehicles
    • B60W 60/001: Planning or execution of driving tasks
    • B60W 60/0025: Planning or execution of driving tasks specially adapted for specific operations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to the technical field of unmanned driving and discloses a control method and apparatus for an unmanned vehicle, an electronic device, and a storage medium.

Description

Method and device for controlling unmanned vehicle, electronic device and storage medium
Technical Field
The present disclosure relates to the field of unmanned vehicles, and more particularly, to a method and an apparatus for controlling an unmanned vehicle, an electronic device, and a storage medium.
Background
With the development of science and technology, intelligent unmanned vehicles have gradually become known. Because they do not require a driver and can realize functions such as automatic departure, automatic driving and automatic parking, intelligent unmanned vehicles can take over the work of a driver. A driver who has driven a vehicle for a period of time needs to rest to ensure safe driving, and fatigued driving can lead to problems such as traffic accidents, so unmanned vehicles have certain advantages in this respect. How to further improve the utilization rate of the unmanned vehicle has therefore always been worth researching.
Disclosure of Invention
The embodiments of the present application provide a control method and apparatus for an unmanned vehicle, an electronic device, and a storage medium, so as to solve the problem of how to improve the utilization rate of the unmanned vehicle.
In one aspect, an embodiment of the present application provides a control method for an unmanned vehicle, including:
acquiring an image of the environment surrounding the unmanned vehicle;
identifying a target object with a vehicle-using intention from the image;
controlling the unmanned vehicle to pick up the target object according to position information of the target object;
after the pickup succeeds, interacting with the target object to determine a destination of the target object and a driving route to the destination;
controlling the unmanned vehicle to travel to the destination according to the driving route.
In an embodiment of the present application, the unmanned vehicle includes a forward-looking camera and a look-around camera, and acquiring an image of the environment around the unmanned vehicle includes:
respectively acquiring images collected by the forward-looking camera and the look-around camera.
In an embodiment of the application, identifying the target object with the vehicle-using intention from the image includes:
extracting, with a pre-trained first target extraction model, first candidate target objects with a designated vehicle-using posture from the image acquired by the forward-looking camera to obtain a first candidate target object set; and extracting, with a pre-trained second target extraction model, second candidate target objects with the designated vehicle-using posture from the image acquired by the look-around camera to obtain a second candidate target object set;
screening out candidate target objects that are the same in the first candidate target object set and the second candidate target object set, and filtering the same candidate target objects out of the second candidate target object set;
based on the first candidate target object set and the filtered second candidate target object set, screening out a candidate target object that satisfies a preset position relationship with the unmanned vehicle as the target object with the vehicle-using intention.
In an embodiment of the present application, screening out the same candidate target objects in the first candidate target object set and the second candidate target object set includes:
determining position information of each first candidate target object in the first candidate target object set, and determining position information of each second candidate target object in the second candidate target object set;
comparing the position information of each first candidate target object in the first candidate target object set with the position information of each second candidate target object in the second candidate target object set one by one;
taking a first candidate target object and a second candidate target object whose position difference is within a preset distance range as the same candidate target object.
In an embodiment of the application, the designated vehicle-using posture is facing the unmanned vehicle while making a designated gesture.
In an embodiment of the application, when the images acquired by the forward-looking camera and the look-around camera are both video frames, each first candidate target object and each second candidate target object appears in at least two video frames.
In an embodiment of the application, the preset position relationship is having the shortest distance to the unmanned vehicle.
In an embodiment of the application, the first target extraction model is trained according to the following method:
obtaining a first training sample, wherein the first training sample includes a first sample image and the designated vehicle-using posture annotated for the first sample image;
inputting the first sample image into the first target extraction model to obtain a predicted posture, output by the first target extraction model, of a target object in the first sample image;
training parameters of the first target extraction model based on a loss between the predicted posture of the target object and the designated vehicle-using posture annotated for the first sample image.
In an embodiment of the application, the second target extraction model is trained according to the following method:
obtaining a second training sample, wherein the second training sample includes a second sample image and the designated vehicle-using posture annotated for the second sample image;
inputting the second sample image into the second target extraction model to obtain a predicted posture, output by the second target extraction model, of a target object in the second sample image;
training parameters of the second target extraction model based on a loss between the predicted posture of the target object and the designated vehicle-using posture annotated for the second sample image.
In an embodiment of the application, controlling the unmanned vehicle to pick up the target object according to the position information of the target object includes:
controlling the unmanned vehicle to travel to a specified range around the position indicated by the position information of the target object;
when it is detected that the unmanned vehicle is located within the specified range around the position information of the target object, determining through voice interaction whether the target object will take the vehicle;
if it is determined that the target object will take the vehicle, instructing the target object to get on the vehicle.
In an embodiment of the application, interacting with the target object after the pickup succeeds to determine the destination of the target object and the driving route to the destination includes:
receiving and recognizing voice information of the target object, and determining the destination of the target object;
displaying the destination of the target object on a display screen built into the unmanned vehicle, and planning at least one candidate route after receiving a destination confirmation instruction from the target object;
selecting one candidate route as the driving route.
In an embodiment of the present application, selecting one candidate route as the driving route includes:
in response to an indication that the target object has selected one of the at least one candidate route, taking the selected candidate route as the driving route.
In an embodiment of the application, after controlling the unmanned vehicle to travel to the destination according to the travel route, the method further includes:
requesting the target object to pay for the order;
after it is confirmed that payment by the target object is completed, prompting the target object to evaluate the service.
In one aspect, an embodiment of the present application provides a control apparatus for an unmanned vehicle, including:
an acquisition module for acquiring an image of an environment surrounding the unmanned vehicle;
the analysis module is used for identifying a target object with a vehicle-using intention from the image;
the first control module is used for controlling the unmanned vehicle to pick up the target object according to the position information of the target object;
the determining module is used for interacting with the target object after the pickup succeeds to determine the destination of the target object and a driving route to the destination;
and the second control module is used for controlling the unmanned vehicle to travel to the destination according to the travel route.
In an embodiment of the present application, the unmanned vehicle includes a forward looking camera and a look around camera, the obtaining module is configured to:
and respectively acquiring images collected by the forward-looking camera and the all-round looking camera.
In an embodiment of the application, the analysis module is configured to:
extracting, with a pre-trained first target extraction model, first candidate target objects with a designated vehicle-using posture from the image acquired by the forward-looking camera to obtain a first candidate target object set; and extracting, with a pre-trained second target extraction model, second candidate target objects with the designated vehicle-using posture from the image acquired by the look-around camera to obtain a second candidate target object set;
screening out candidate target objects that are the same in the first candidate target object set and the second candidate target object set, and filtering the same candidate target objects out of the second candidate target object set;
based on the first candidate target object set and the filtered second candidate target object set, screening out a candidate target object that satisfies a preset position relationship with the unmanned vehicle as the target object with the vehicle-using intention.
In an embodiment of the application, the analysis module is further configured to:
determining position information of each first candidate target object in the first candidate target object set respectively; and determining the position information of each second candidate target object in the second candidate target object set respectively;
comparing the position information of each first candidate target object in the first candidate target object set with the position information of each second candidate target object in the second candidate target object set one by one;
and taking the first candidate target object and the second candidate target object with the position difference within a preset distance range as the same candidate target object.
In an embodiment of the application, the designated vehicle-using posture is facing the unmanned vehicle while making a designated gesture.
In an embodiment of the application, when the images acquired by the forward-looking camera and the look-around camera are both video frames, the number of the video frames in which each of the first candidate target object and the second candidate target object is located is at least two.
In an embodiment of the application, the preset position relationship is a shortest distance.
In an embodiment of the present application, the apparatus further includes:
a first training module to train the first target extraction model according to the following method:
obtaining a first training sample, wherein the first training sample comprises a first sample image and a specified vehicle using posture related to the first sample image;
inputting the first sample image into the first target extraction model to obtain a predicted posture of a target object in the first sample image output by the first target extraction model;
training parameters of the first target extraction model based on a loss between the predicted pose of the target object and the designated car pose of the first sample image.
In an embodiment of the present application, the apparatus further includes:
a second training module for training the second target extraction model according to the following method:
acquiring a second training sample, wherein the second training sample comprises a second sample image and a specified vehicle using posture related to the second sample image;
inputting the second sample image into the second target extraction model to obtain a predicted posture of a target object in the second sample image output by the second target extraction model;
training parameters of the second target extraction model based on a loss between the predicted pose of the target object and the designated car pose of the second sample image.
In an embodiment of the present application, the first control module is configured to:
controlling the unmanned vehicle to travel to a specified range around the position indicated by the position information of the target object;
when it is detected that the unmanned vehicle is located within the specified range around the position information of the target object, determining through voice interaction whether the target object will take the vehicle;
if it is determined that the target object will take the vehicle, instructing the target object to get on the vehicle.
In an embodiment of the application, the determining module is configured to:
receiving and recognizing voice information of the target object, and determining a destination of the target object;
displaying the destination of the target object through a display screen built in the unmanned vehicle, and planning at least one candidate route after receiving a destination confirmation instruction of the target object;
a candidate route is selected as the travel route.
In an embodiment of the application, the determining module is further configured to:
in response to an indication of selection of the at least one candidate route by the target object, treating the selected candidate route as the driving route.
In an embodiment of the application, the second control module is further configured to:
requesting the target object to pay an order;
and after the payment of the target object is confirmed to be completed, prompting the target object to evaluate the service.
In one aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of any one of the methods when executing the computer program.
In one aspect, an embodiment of the present application provides a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, implement the steps of any of the above-described methods.
In one aspect, an embodiment of the present application provides a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method for controlling an unmanned vehicle provided in any of the various alternative implementations described above.
According to the control method of the unmanned vehicle provided by the embodiments of the present application, an image of the environment around the unmanned vehicle is acquired, a target object with a vehicle-using intention is identified from the image, the unmanned vehicle is then controlled to pick up the target object according to the position information of the target object, after the pickup succeeds the destination of the target object and a driving route to the destination are determined by interacting with the target object, and finally the unmanned vehicle is controlled to travel to the destination according to the driving route, so that the utilization rate of the unmanned vehicle can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic application scenario diagram of a control method of an unmanned vehicle according to an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a method for controlling an unmanned vehicle according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for screening a target object from an image according to an embodiment of the present application;
FIG. 4(a) is a schematic diagram of an annotated image according to an embodiment of the present application;
FIG. 4(b) is a schematic diagram of an orientation of a pedestrian facing an unmanned vehicle according to an embodiment of the present application;
fig. 5(a) is a schematic diagram of an application scenario illustrating candidate target objects according to an embodiment of the present application;
fig. 5(b) is a schematic diagram of an application scenario illustrating candidate target objects according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a control device of an unmanned vehicle according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Hereinafter, some terms in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
(1) In the embodiments of the present application, the term "plurality" means two or more, and other terms are similar thereto.
(2) "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
(3) A server is a device that serves the terminal, for example by providing resources to the terminal and storing terminal data; the server corresponds to the application program installed on the terminal and runs in cooperation with the application program on the terminal.
(4) A terminal may refer to a software application (APP) or client. It has a visual display interface and can interact with a user; it corresponds to a server and provides local services to the client. Except for some applications that run only locally, software applications are generally installed on a common client terminal and need to run in cooperation with a server. After the development of the Internet, more common applications include e-mail clients for sending and receiving e-mail, and instant messaging clients. Such applications require a corresponding server and service program in the network to provide services such as database services and configuration parameter services, so a specific communication connection needs to be established between the client terminal and the server terminal to ensure the normal operation of the application program.
Any number of elements in the drawings are by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
In practice, after a driver has driven a vehicle for a period of time, the driver needs to rest to ensure safe driving, and fatigued driving can lead to problems such as traffic accidents. It is therefore worth discussing how to further improve the utilization rate of the unmanned vehicle.
To this end, the present application provides a control method for an unmanned vehicle, based on the following conception: the development of online ride-hailing has gradually replaced the traditional mode of hailing a taxi blindly at the roadside. An online ride-hailing system uses Internet and communication technology to match users who need a ride with users who provide the ride service, so that neither party has to search blindly, which improves the efficiency of getting a ride. In the same way, unmanned driving can be combined with ride-hailing services. In view of this, in the embodiments of the present application, an image of the environment around the unmanned vehicle is acquired, a target object with a vehicle-using intention is identified from the image, the unmanned vehicle is then controlled to pick up the target object according to the position information of the target object, after the pickup succeeds the destination of the target object and a driving route to the destination are determined by interacting with the target object, and finally the unmanned vehicle is controlled to travel to the destination according to the driving route; compared with picking up passengers with a driver-operated vehicle, this can improve work efficiency.
After the inventive concept of the embodiment of the present application is introduced, some simple descriptions are made below on application scenarios to which the technical solution of the embodiment of the present application can be applied, and it should be noted that the application scenarios described below are only used for describing the embodiment of the present application and are not limited. In specific implementation, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
Reference is made to fig. 1, which is a schematic view of an application scenario of a control method of an unmanned vehicle according to an embodiment of the present application. The application scene comprises an unmanned vehicle 101, a pedestrian 1, a pedestrian 2, a pedestrian 3 and a pedestrian 4, wherein the unmanned vehicle 101 is internally provided with an on-board terminal.
The vehicle-mounted terminal acquires an image of the environment around the vehicle sent by a camera in the unmanned vehicle 101. As shown in fig. 1, the image includes pedestrian 1, pedestrian 2, pedestrian 3 and pedestrian 4. The vehicle-mounted terminal identifies the target object with a vehicle-using intention, namely pedestrian 2, from the image, and then controls the unmanned vehicle 101 to travel from its current position A to position B according to the position information B of pedestrian 2, so that the unmanned vehicle 101 can pick up pedestrian 2. After the pickup succeeds, the vehicle-mounted terminal acquires the interaction information between the unmanned vehicle 101 and pedestrian 2, determines from the interaction information the destination of pedestrian 2, namely position C, and a driving route to the destination, and controls the unmanned vehicle 101 to travel to position C according to the driving route.
Alternatively, the application scenario may also include the server 102. The vehicle-mounted terminal transmits the acquired image to the server 102 for analysis processing, and controls the unmanned vehicle 101 to execute the method, or the vehicle-mounted terminal and the server 102 cooperatively execute the method.
The vehicle-mounted terminal built in the unmanned vehicle 101 is connected to the server 102 through a wireless or wired network, and the server 102 may be a server, a server cluster formed by a plurality of servers, or a cloud computing center. The server 102 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like.
Of course, the method provided in the embodiment of the present application is not limited to be used in the application scenario shown in fig. 1, and may also be used in other possible application scenarios, and the embodiment of the present application is not limited. The functions that can be implemented by each device in the application scenario shown in fig. 1 will be described in the following method embodiments, and will not be described in detail herein.
To further illustrate the technical solutions provided by the embodiments of the present application, a detailed description is given below with reference to the accompanying drawings and specific embodiments. Although the embodiments of the present application provide the method operation steps shown in the following embodiments or figures, more or fewer operation steps may be included in the method based on conventional or non-inventive labor. For steps that logically have no necessary causal relationship, the execution order is not limited to that provided by the embodiments of the present application.
The following describes the technical solution provided in the embodiment of the present application with reference to the application scenario shown in fig. 1.
Referring to fig. 2, an embodiment of the present application provides a control method of an unmanned vehicle, including the steps of:
s201, acquiring an image of the surrounding environment of the unmanned vehicle.
Optionally, the unmanned vehicle includes a forward-looking camera and a look-around camera, and images collected by the forward-looking camera and the look-around camera are respectively acquired.
Illustratively, the forward-looking camera and the look-around camera may be installed at preset positions on the roof of the unmanned vehicle; the forward-looking camera may capture images over a longer distance range, and the look-around camera may capture images over a shorter distance range.
Acquiring the images collected by both the forward-looking camera and the look-around camera makes the acquired view of the environment around the unmanned vehicle more comprehensive, so that the target object can be determined more accurately.
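The patent does not give implementation code; as a minimal illustrative sketch, the acquisition step could look like the following, assuming the two cameras are exposed as ordinary OpenCV video devices (the device indices and function name are placeholders, not part of the patent):

```python
import cv2

# Hypothetical device indices; a real system would use the vehicle's camera interfaces.
FRONT_CAMERA_INDEX = 0
SURROUND_CAMERA_INDEX = 1

def acquire_environment_images():
    """Grab one frame each from the forward-looking and look-around cameras."""
    front_cap = cv2.VideoCapture(FRONT_CAMERA_INDEX)
    surround_cap = cv2.VideoCapture(SURROUND_CAMERA_INDEX)
    try:
        ok_front, front_image = front_cap.read()
        ok_surround, surround_image = surround_cap.read()
        if not (ok_front and ok_surround):
            raise RuntimeError("failed to read from one of the cameras")
        return front_image, surround_image
    finally:
        front_cap.release()
        surround_cap.release()
```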
Here, the unmanned vehicle may further include cameras with other functions, such as a rear-view camera. This is only an example; the specific camera types are not limited and may be adjusted according to the actual application.
S202, a target object with a vehicle-using intention is identified from the image.
S203, the unmanned vehicle is controlled to pick up the target object according to the position information of the target object.
S204, after the pickup succeeds, the destination of the target object and a driving route to the destination are determined by interacting with the target object.
S205, the unmanned vehicle is controlled to travel to the destination according to the driving route.
Specifically, when the unmanned vehicle includes a forward-looking camera and a look-around camera, after the images collected by the forward-looking camera and the look-around camera are respectively acquired, the target object is screened from the images by the method shown in fig. 3, which includes:
s301, extracting a first candidate target object with a specified vehicle using posture in an image acquired by a forward-looking camera by adopting a pre-trained first target extraction model to obtain a first candidate target object set; and extracting a second candidate target object with a specified vehicle using posture in an image acquired by the panoramic camera by adopting a pre-trained second target extraction model to obtain a second candidate target object set.
Here, the execution order of extracting the first candidate target object by the first target extraction model and extracting the second candidate target object by the second target extraction model is not limited.
Optionally, the designated vehicle-using posture is facing the unmanned vehicle while making a designated gesture. The designated gesture may be a hand-waving gesture or another preset gesture, such as an OK gesture. Here, the target object may be regarded as facing the unmanned vehicle when its orientation relative to the unmanned vehicle is within a preset angle range.
By defining the designated vehicle-using posture as facing the unmanned vehicle while making a designated gesture, the target object can be determined more accurately.
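As a hedged illustration of this check, the snippet below assumes the upstream detector already reports an orientation angle and a gesture label for each person; the angle range and gesture names are invented placeholders rather than values from the patent:

```python
# Hypothetical orientation range (degrees) and gesture labels; the patent only
# states that a preset angle range and a designated gesture are used.
FACING_ANGLE_RANGE = (-30.0, 30.0)
DESIGNATED_GESTURES = {"wave", "ok"}

def has_designated_posture(orientation_deg: float, gesture: str) -> bool:
    """True if the detected person faces the vehicle and shows a designated gesture."""
    lo, hi = FACING_ANGLE_RANGE
    return lo <= orientation_deg <= hi and gesture in DESIGNATED_GESTURES
```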
Wherein the first target extraction model is trained according to the following method:
obtaining a first training sample, wherein the first training sample includes a first sample image and the designated vehicle-using posture annotated for the first sample image; inputting the first sample image into the first target extraction model to obtain a predicted posture, output by the first target extraction model, of a target object in the first sample image; and training parameters of the first target extraction model based on a loss between the predicted posture of the target object and the designated vehicle-using posture annotated for the first sample image.
And training a second target extraction model according to the following method:
obtaining a second training sample, wherein the second training sample includes a second sample image and the designated vehicle-using posture annotated for the second sample image; inputting the second sample image into the second target extraction model to obtain a predicted posture, output by the second target extraction model, of a target object in the second sample image; and training parameters of the second target extraction model based on a loss between the predicted posture of the target object and the designated vehicle-using posture annotated for the second sample image.
For example, the first target extraction model and the second target extraction model may each be a YOLOv3 model; this is only an example, and the application does not limit the specific target extraction model, which may be adjusted according to the actual application.
Illustratively, taking the training process of the first target extraction model as an example, a large number of images of pedestrians making a hand-waving motion are collected and annotated. As shown in fig. 4(a), the overall bounding box of a pedestrian making a hand-waving motion in the image and the orientation of the pedestrian relative to the unmanned vehicle are annotated. As shown in fig. 4(b), when the orientation of the pedestrian relative to the unmanned vehicle is within the range θ1, the pedestrian is considered to be facing the unmanned vehicle; when the orientation is within the range θ2 or θ3, the pedestrian is considered to face the unmanned vehicle side-on; and when the orientation is within the range θ4, the pedestrian is considered to be facing away from the unmanned vehicle.
After the images in the collected training sample set are annotated according to the above method, each image is input into the first target extraction model, and the output result is a vector, which may for example be represented as [x, y, w, h, c, th0, th1, th2]. The position information of the annotated pedestrian is obtained from (x, y, w, h): in one embodiment, x and y give the vertex coordinates of the detection box of the annotated pedestrian, and w and h give the corresponding width and height of the detection box. The value c gives the predicted probability that the annotated pedestrian is making a hand-waving motion; th0 gives the predicted probability that the pedestrian's orientation relative to the unmanned vehicle is within the range θ1, th1 gives the predicted probability that the orientation is within the range θ2 or θ3, and th2 gives the predicted probability that the orientation is within the range θ4.
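To make the vector layout concrete, the following sketch (an assumption about how such an output might be consumed, not code from the patent) splits the eight values into a detection box, a waving probability and the three orientation probabilities:

```python
from dataclasses import dataclass

ORIENTATION_LABELS = ("facing", "side-on", "facing-away")

@dataclass
class PedestrianPrediction:
    box: tuple                # (x, y, w, h): top-left corner plus width and height
    waving_prob: float        # c: probability that the pedestrian is waving
    orientation_probs: tuple  # (th0, th1, th2): facing / side-on / facing away

def decode_output(vec):
    """Split the 8-value model output described above into named fields."""
    x, y, w, h, c, th0, th1, th2 = vec
    return PedestrianPrediction(box=(x, y, w, h),
                                waving_prob=c,
                                orientation_probs=(th0, th1, th2))

def orientation_label(pred: PedestrianPrediction) -> str:
    """Return the most likely orientation class for a decoded prediction."""
    probs = pred.orientation_probs
    return ORIENTATION_LABELS[probs.index(max(probs))]
```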
Optionally, the first target extraction model is trained based on the loss between the output result and the annotated data of the training sample. In one embodiment, the loss value of the first target extraction model is obtained by calculating the error of each value in the vector, and the first target extraction model is trained using this loss value. The training process of the second target extraction model may refer to the training process of the first target extraction model.
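A possible form of such a per-value loss is sketched below in PyTorch, assuming the model already outputs probabilities in [0, 1] for the last four entries; the exact split into a box regression term and a probability term is an assumption, since the patent only states that the prediction is compared with the annotation:

```python
import torch
import torch.nn.functional as F

def detection_loss(pred_vec: torch.Tensor, target_vec: torch.Tensor) -> torch.Tensor:
    """Illustrative loss over the 8-value vector [x, y, w, h, c, th0, th1, th2]:
    a regression term for the box and a binary term for each probability."""
    # First four values: box coordinates, compared with a squared error.
    box_loss = F.mse_loss(pred_vec[:4], target_vec[:4])
    # Last four values: probabilities (assumed sigmoid-activated), compared with BCE.
    prob_loss = F.binary_cross_entropy(pred_vec[4:], target_vec[4:])
    return box_loss + prob_loss
```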
S302, screening out the same candidate target objects in the first candidate target object set and the second candidate target object set, and filtering the same candidate target objects out of the second candidate target object set.
In some embodiments, the position information of each first candidate target object in the first candidate target object set is determined, and the position information of each second candidate target object in the second candidate target object set is determined; the position information of each first candidate target object is compared with the position information of each second candidate target object one by one; and a first candidate target object and a second candidate target object whose position difference is within a preset distance range are taken as the same candidate target object.
Illustratively, the image shown in fig. 5(a) is captured by the forward-looking camera, and the image shown in fig. 5(b) is captured by the look-around camera. The first candidate target object set determined from the image captured by the forward-looking camera includes candidate target object 1 and candidate target object 2, and the second candidate target object set determined from the image captured by the look-around camera includes candidate target object 3 and candidate target object 4. The position of each candidate target object in the image in which it appears is acquired, and the position information of each candidate target object is determined based on the mapping relationship between image positions and real positions. The position information of each candidate target object is compared with the position information of the other candidate target objects; if the position difference between candidate target object 1 and candidate target object 4 is within the preset range, candidate target object 1 and candidate target object 4 are determined to be the same candidate target object.
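A minimal sketch of this duplicate-filtering step, with an assumed 1 m threshold standing in for the "preset distance range", might look as follows:

```python
import math

# Hypothetical threshold; the patent only requires a "preset distance range".
SAME_OBJECT_DISTANCE_M = 1.0

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def filter_duplicates(first_set, second_set, threshold=SAME_OBJECT_DISTANCE_M):
    """first_set / second_set: lists of (x, y) positions in vehicle coordinates.
    Returns the second set with any candidate that matches a first-set candidate
    (position difference within the threshold) removed."""
    filtered = []
    for q in second_set:
        if not any(euclidean(p, q) <= threshold for p in first_set):
            filtered.append(q)
    return filtered

# Example matching the fig. 5 description: candidate 1 (forward-looking camera) and
# candidate 4 (look-around camera) are close enough to count as the same person.
front = [(12.0, 3.0), (20.0, -4.0)]      # candidates 1 and 2
around = [(5.0, 6.0), (12.3, 3.2)]       # candidates 3 and 4
print(filter_duplicates(front, around))  # -> [(5.0, 6.0)]
```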
Here, the candidate target objects that are the same in the first candidate target object set and the second candidate target object set are filtered out of the second candidate target object set, so that the efficiency of screening the target object can be improved.
S303, based on the first candidate target object set and the filtered second candidate target object set, a candidate target object that satisfies a preset position relationship with the unmanned vehicle is screened out as the target object with the vehicle-using intention.
The preset position relationship may be having the shortest distance to the unmanned vehicle. By determining the candidate target object closest to the unmanned vehicle as the target object with the vehicle-using intention, the pickup efficiency of the unmanned vehicle can be improved.
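For illustration, selecting the candidate with the shortest distance could be as simple as the following sketch (vehicle-centred coordinates are assumed):

```python
import math

def nearest_candidate(candidates, vehicle_position=(0.0, 0.0)):
    """candidates: list of (x, y) positions of the remaining candidate target objects.
    Returns the candidate closest to the vehicle, or None if the list is empty."""
    if not candidates:
        return None
    return min(candidates,
               key=lambda p: math.hypot(p[0] - vehicle_position[0],
                                        p[1] - vehicle_position[1]))
```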
In an embodiment of the application, the same candidate target objects in the first candidate target object set and the second candidate target object set may instead be filtered out of the first candidate target object set, and, based on the second candidate target object set and the filtered first candidate target object set, a candidate target object that satisfies the preset position relationship with the unmanned vehicle is screened out as the target object with the vehicle-using intention.
In an embodiment of the application, when the images acquired by the forward-looking camera and the look-around camera are video frames, each first candidate target object and each second candidate target object appears in at least two video frames.
Requiring each candidate target object to appear in at least two video frames improves the accuracy of the screened-out target object with the vehicle-using intention.
After the target object is screened out, when step S203 is executed, the unmanned vehicle is controlled to travel to a specified range around the position indicated by the position information of the target object; when it is detected that the unmanned vehicle is located within that specified range, whether the target object will take the vehicle is determined through voice interaction; and if it is determined that the target object will take the vehicle, the target object is instructed to get on the vehicle.
In an embodiment of the application, whether the target object will take the vehicle can also be verified by displaying a preset gesture on a display on the unmanned vehicle or by providing a button on the unmanned vehicle. If the collected voice of the target object confirms the ride, or the target object is observed making the preset gesture, or the target object presses the button to confirm the ride, the target object is instructed to get on the vehicle. If the collected voice of the target object declines the ride, or no voice reply from the target object is collected, it is determined that the target object will not take the vehicle, and a voice reply such as "welcome to ride next time" may be played, or "welcome to ride next time" may be displayed on a display on the unmanned vehicle.
Optionally, after it is detected that the unmanned vehicle is located within the specified range around the position information of the target object, a preset gesture, such as an OK gesture, may be displayed on a display screen provided on the outside of the unmanned vehicle; if the target object is detected making the corresponding preset gesture, the target object is considered to have a vehicle-using intention. Alternatively, a ride button may be provided on the outside of the unmanned vehicle, and if the target object presses the ride button, the riding intention of the target object is confirmed. Here, the gesture made by the target object may be detected by a pre-trained model.
The position information of the target object is determined by first determining the positions of the target object in the image collected by the look-around camera and in the image collected by the forward-looking camera, and then determining the actual position information of the target object according to the mapping relationship between the image positions and the actual distances.
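One common way to realise such a mapping, offered here only as an assumption since the patent does not specify it, is a pre-calibrated ground-plane homography applied to the foot point of the detection box:

```python
import numpy as np
import cv2

# Hypothetical ground-plane homography obtained from camera calibration; the
# patent only refers to a mapping relationship between image positions and
# actual distances without specifying how it is obtained.
H_FRONT = np.array([[0.02, 0.0, -6.4],
                    [0.0, 0.02, -3.6],
                    [0.0,  0.0,  1.0]], dtype=np.float64)

def pixel_to_ground(pixel_xy, homography=H_FRONT):
    """Map a pixel coordinate (e.g. the foot point of the detection box) to
    ground-plane coordinates in metres relative to the vehicle."""
    pt = np.array([[pixel_xy]], dtype=np.float64)   # shape (1, 1, 2)
    ground = cv2.perspectiveTransform(pt, homography)
    return tuple(ground[0, 0])
```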
Optionally, after the pickup succeeds, the target object may be prompted by voice to fasten the seat belt, and a camera built into the unmanned vehicle detects whether the seat belt has been fastened after the voice prompt. If it is detected that the target object has not fastened the seat belt, the target object is prompted again by voice; if it is detected that the seat belt is fastened, the destination of the target object is determined through voice interaction.
In an embodiment of the present application, when step S204 is executed, the destination of the target object is determined by receiving and recognizing the voice information of the target object; the destination of the target object is displayed on a display screen built into the unmanned vehicle, and at least one candidate route is planned after a destination confirmation instruction from the target object is received; and one candidate route is selected as the driving route.
Optionally, in response to an indication that the target object has selected one of the at least one candidate route, the selected candidate route is taken as the driving route.
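As a rough sketch of this selection logic (the Route fields and the shortest-distance fallback are assumptions, not stated in the patent):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Route:
    name: str
    distance_km: float
    waypoints: List[str]

def choose_route(candidates: List[Route],
                 selected_index: Optional[int] = None) -> Route:
    """Pick the driving route from the planned candidates. If the passenger
    indicated a choice (e.g. via the in-vehicle screen), use it; otherwise
    fall back to the shortest candidate."""
    if selected_index is not None:
        return candidates[selected_index]
    return min(candidates, key=lambda r: r.distance_km)
```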
In an embodiment of the application, after the step S205 is executed, the method further includes:
and requesting a target object payment order, and prompting the target object to evaluate the service after confirming that the target object payment is completed.
The service process can be continuously improved by collecting the evaluation of the target object on the service and the image information and the voice information in the process of receiving the driving corresponding to the service, so that the unmanned vehicle can be better utilized to receive the driving.
The control method of the unmanned vehicle comprises the steps of obtaining an image of the surrounding environment of the unmanned vehicle, analyzing a target object with a vehicle using intention from the image, controlling the unmanned vehicle to carry out receiving driving on the target object according to position information of the target object, interacting with the target object after the receiving driving is successful to determine a destination of the target object and a driving route to the destination, and finally controlling the unmanned vehicle to drive to the destination according to the driving route, so that the utilization rate of the unmanned vehicle can be improved.
As shown in fig. 6, based on the same inventive concept as the above-described control method of the unmanned vehicle, the present embodiment also provides a control apparatus 60 of the unmanned vehicle, including: an acquisition module 601, an analysis module 602, a first control module 603, a determination module 604, and a second control module 605.
An obtaining module 601, configured to obtain an image of an environment around the unmanned vehicle;
an analysis module 602, configured to identify a target object with a vehicle-using intention from the image;
the first control module 603 is configured to control the unmanned vehicle to pick up the target object according to the position information of the target object;
the determining module 604 is configured to interact with the target object to determine a destination of the target object and a driving route to the destination after the pickup is successful;
a second control module 605 for controlling the unmanned vehicle to travel to the destination according to the travel route.
In an embodiment of the present application, the unmanned vehicle includes a forward-looking camera and a look-around camera, and the obtaining module 601 is configured to:
and respectively acquiring images collected by the forward-looking camera and the all-round looking camera.
In an embodiment of the present application, the analysis module 602 is configured to:
extracting a first candidate target object with a specified vehicle-using posture in an image acquired by the forward-looking camera by adopting a pre-trained first target extraction model to obtain a first candidate target object set; extracting a second candidate target object with the appointed vehicle using posture in the image acquired by the all-round-looking camera by adopting a pre-trained second target extraction model to obtain a second candidate target object set;
screening the same candidate target object in the first candidate target object set and the second candidate target object set, and filtering the same candidate target object from the second candidate target object set;
and screening out candidate target objects which meet a preset position relation with the unmanned vehicle as the target objects with the vehicle using intention based on the first candidate target object set and the screened second candidate target object set.
In an embodiment of the application, the analysis module 602 is further configured to:
determining position information of each first candidate target object in the first candidate target object set respectively; and determining the position information of each second candidate target object in the second candidate target object set respectively;
comparing the position information of each first candidate target object in the first candidate target object set with the position information of each second candidate target object in the second candidate target object set one by one;
and taking the first candidate target object and the second candidate target object with the position difference within a preset distance range as the same candidate target object.
In an embodiment of the application, the designated vehicle-using posture is facing the unmanned vehicle and has a designated gesture.
In an embodiment of the application, when the images acquired by the forward-looking camera and the look-around camera are both video frames, the number of the video frames in which each of the first candidate target object and the second candidate target object is located is at least two.
In an embodiment of the application, the preset position relationship is a shortest distance.
In an embodiment of the present application, the apparatus 60 further includes:
a first training module to train the first target extraction model according to the following method:
obtaining a first training sample, wherein the first training sample comprises a first sample image and a specified vehicle using posture related to the first sample image;
inputting the first sample image into the first target extraction model to obtain a predicted posture of a target object in the first sample image output by the first target extraction model;
training parameters of the first target extraction model based on a loss between the predicted pose of the target object and the designated car pose of the first sample image.
In an embodiment of the present application, the apparatus 60 further includes:
a second training module for training the second target extraction model according to the following method:
acquiring a second training sample, wherein the second training sample comprises a second sample image and a specified vehicle using posture related to the second sample image;
inputting the second sample image into the second target extraction model to obtain a predicted posture of a target object in the second sample image output by the second target extraction model;
training parameters of the second target extraction model based on a loss between the predicted pose of the target object and the designated car pose of the second sample image.
In an embodiment of the present application, the first control module 603 is configured to:
controlling the unmanned vehicle to travel to a specified range around the position information of the target object;
when the unmanned vehicle is detected to be located in a specified range around the position information of the target object, determining whether the target object takes the vehicle or not through voice interaction;
and if the target object is determined to take the vehicle, indicating the target object to get on the vehicle.
In an embodiment of the present application, the determining module 604 is configured to:
receiving and recognizing voice information of the target object, and determining a destination of the target object;
displaying the destination of the target object through a display screen built in the unmanned vehicle, and planning at least one candidate route after receiving a destination confirmation instruction of the target object;
a candidate route is selected as the travel route.
In an embodiment of the application, the determining module 604 is further configured to:
in response to an indication of selection of the at least one candidate route by the target object, treating the selected candidate route as the driving route.
In an embodiment of the present application, the second control module 605 is further configured to:
requesting the target object to pay an order;
and after the payment of the target object is confirmed to be completed, prompting the target object to evaluate the service.
The control device of the unmanned vehicle and the control method of the unmanned vehicle provided by the embodiment of the application adopt the same inventive concept, can obtain the same beneficial effects, and are not repeated herein.
Based on the same inventive concept as the control method of the unmanned vehicle, an embodiment of the present application further provides an electronic device, which may be specifically a desktop computer, a portable computer, a smart phone, a tablet computer, a Personal Digital Assistant (PDA), a server, and the like. As shown in fig. 7, the electronic device 70 may include a processor 701 and a memory 702.
The Processor 701 may be a general-purpose Processor, such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
Memory 702, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The Memory may include at least one type of storage medium, and may include, for example, a flash Memory, a hard disk, a multimedia card, a card-type Memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read Only Memory (PROM), a Read Only Memory (ROM), a charged Erasable Programmable Read Only Memory (EEPROM), a magnetic Memory, a magnetic disk, an optical disk, and so on. The memory is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory 702 in the embodiments of the present application may also be circuitry or any other device capable of performing a storage function for storing program instructions and/or data.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; the computer storage media may be any available media or data storage device that can be accessed by a computer, including but not limited to: various media that can store program codes include a removable Memory device, a Random Access Memory (RAM), a magnetic Memory (e.g., a flexible disk, a hard disk, a magnetic tape, a magneto-optical disk (MO), etc.), an optical Memory (e.g., a CD, a DVD, a BD, an HVD, etc.), and a semiconductor Memory (e.g., a ROM, an EPROM, an EEPROM, a nonvolatile Memory (NAND FLASH), a Solid State Disk (SSD)).
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media that can store program codes include a removable Memory device, a Random Access Memory (RAM), a magnetic Memory (e.g., a flexible disk, a hard disk, a magnetic tape, a magneto-optical disk (MO), etc.), an optical Memory (e.g., a CD, a DVD, a BD, an HVD, etc.), and a semiconductor Memory (e.g., a ROM, an EPROM, an EEPROM, a nonvolatile Memory (NAND FLASH), a Solid State Disk (SSD)).
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method of controlling an unmanned vehicle provided in the various optional implementations described above.
The above embodiments are described in detail only to explain the technical solutions of the present application and to help understand the method of the embodiments of the present application; they should not be construed as limiting the embodiments of the present application. Modifications and substitutions that are readily apparent to those skilled in the art are intended to fall within the scope of the embodiments of the present application.

Claims (28)

1. A control method of an unmanned vehicle, characterized by comprising:
acquiring an image of an environment surrounding the unmanned vehicle;
analyzing, from the image, a target object having a vehicle-using intention;
controlling the unmanned vehicle to pick up the target object according to position information of the target object;
after the pickup is successful, interacting with the target object to determine the destination of the target object and a driving route to the destination;
controlling the unmanned vehicle to travel to the destination according to the driving route.
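For illustration only (not part of the claimed subject matter), the following minimal Python sketch shows one possible shape of the claim-1 control flow; all module and function names (`acquire_images`, `find_intent_target`, `drive_to`, `confirm_destination`, `select_route`, `follow_route`) are hypothetical placeholders introduced by the editor, not defined by the patent.

```python
# Illustrative sketch of the claim-1 control loop (assumed interfaces, not the patented implementation).

def run_pickup_cycle(vehicle, perception, dialogue, planner):
    """One ride-hailing cycle for an unmanned vehicle."""
    # 1. Acquire images of the environment surrounding the unmanned vehicle.
    images = perception.acquire_images()

    # 2. Analyze, from the images, a target object having a vehicle-using intention.
    target = perception.find_intent_target(images)
    if target is None:
        return  # no prospective passenger detected in this cycle

    # 3. Control the vehicle to pick up the target according to its position information.
    vehicle.drive_to(target.position)

    # 4. After a successful pickup, interact to determine the destination and a driving route.
    destination = dialogue.confirm_destination()
    route = planner.select_route(destination)

    # 5. Travel to the destination along the chosen driving route.
    vehicle.follow_route(route)
```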
2. The method of claim 1, wherein the unmanned vehicle comprises a forward-looking camera and a look-around camera, and wherein the acquiring an image of the environment surrounding the unmanned vehicle comprises:
respectively acquiring images collected by the forward-looking camera and the look-around camera.
3. The method of claim 2, wherein the analyzing, from the image, a target object having a vehicle-using intention comprises:
extracting, by using a pre-trained first target extraction model, first candidate target objects having a designated vehicle-using posture from the image acquired by the forward-looking camera to obtain a first candidate target object set; and extracting, by using a pre-trained second target extraction model, second candidate target objects having the designated vehicle-using posture from the image acquired by the look-around camera to obtain a second candidate target object set;
screening out candidate target objects that are the same in the first candidate target object set and the second candidate target object set, and filtering the same candidate target objects out of the second candidate target object set;
and screening out, based on the first candidate target object set and the filtered second candidate target object set, a candidate target object that satisfies a preset positional relationship with the unmanned vehicle as the target object having the vehicle-using intention.
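As an editorial illustration of how the two candidate sets of claim 3 might be fused, the sketch below assumes each extraction model returns candidates carrying an estimated 2-D position, and that the preset positional relationship is the shortest distance to the vehicle (as in claim 7). The helper `dedup_by_position` is sketched after claim 4; every name here is hypothetical.

```python
import math

def find_intent_target(front_image, surround_image, front_model, surround_model, ego_position):
    """Fuse candidates from the forward-looking and look-around cameras (claim 3 sketch)."""
    # Extract candidates showing the designated vehicle-using posture from each camera image.
    first_set = front_model.extract(front_image)          # first candidate target object set
    second_set = surround_model.extract(surround_image)   # second candidate target object set

    # Filter out of the second set any candidate already present in the first set (claim 4).
    second_set = dedup_by_position(first_set, second_set, max_diff=0.5)

    # Pick the candidate satisfying the preset positional relationship with the vehicle,
    # assumed here to be the shortest distance to the unmanned vehicle.
    candidates = first_set + second_set
    if not candidates:
        return None
    return min(candidates, key=lambda c: math.dist(c.position, ego_position))
```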
4. The method of claim 3, wherein the screening out candidate target objects that are the same in the first candidate target object set and the second candidate target object set comprises:
determining position information of each first candidate target object in the first candidate target object set respectively; and determining the position information of each second candidate target object in the second candidate target object set respectively;
comparing the position information of each first candidate target object in the first candidate target object set with the position information of each second candidate target object in the second candidate target object set one by one;
and taking a first candidate target object and a second candidate target object whose position difference is within a preset distance range as the same candidate target object.
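The duplicate-filtering step of claim 4 could look like the following sketch; the 0.5 m threshold and the coordinate-tuple position representation are assumptions made for illustration only.

```python
import math

def dedup_by_position(first_set, second_set, max_diff=0.5):
    """Drop second-set candidates whose position matches a first-set candidate (claim 4 sketch)."""
    filtered = []
    for cand_b in second_set:
        # Compare this second candidate against every first candidate, one by one.
        is_duplicate = any(
            math.dist(cand_a.position, cand_b.position) <= max_diff
            for cand_a in first_set
        )
        # Keep only candidates that do not coincide with any first-set candidate.
        if not is_duplicate:
            filtered.append(cand_b)
    return filtered
```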
5. The method of claim 3, wherein the designated vehicle-using posture is facing the unmanned vehicle while making a designated gesture.
6. The method of claim 3, wherein, when the images acquired by the forward-looking camera and the look-around camera are both video frames, each first candidate target object and each second candidate target object appears in at least two video frames.
7. The method according to claim 3, wherein the preset positional relationship is having the shortest distance to the unmanned vehicle.
8. The method of claim 3, wherein the first target extraction model is trained according to the following method:
obtaining a first training sample, wherein the first training sample comprises a first sample image and a designated vehicle-using posture associated with the first sample image;
inputting the first sample image into the first target extraction model to obtain a predicted posture of a target object in the first sample image output by the first target extraction model;
training parameters of the first target extraction model based on a loss between the predicted posture of the target object and the designated vehicle-using posture of the first sample image.
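Claims 8 and 9 describe a standard supervised training loop. The PyTorch-style sketch below is an editorial illustration of that loop only; the model architecture, the cross-entropy loss, the batch size, and the data format are assumptions, not taken from the patent.

```python
import torch
from torch.utils.data import DataLoader

def train_target_extraction_model(model, dataset, epochs=10, lr=1e-4):
    """Train a target extraction model on (sample image, designated posture label) pairs."""
    loader = DataLoader(dataset, batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()  # loss between predicted and designated posture

    for _ in range(epochs):
        for images, posture_labels in loader:
            optimizer.zero_grad()
            predicted = model(images)                    # predicted posture of the target object
            loss = criterion(predicted, posture_labels)  # loss w.r.t. the designated posture
            loss.backward()                              # backpropagate to obtain gradients
            optimizer.step()                             # update the model parameters
    return model
```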
9. The method of claim 3, wherein the second target extraction model is trained according to the following method:
acquiring a second training sample, wherein the second training sample comprises a second sample image and a designated vehicle-using posture associated with the second sample image;
inputting the second sample image into the second target extraction model to obtain a predicted posture of a target object in the second sample image output by the second target extraction model;
training parameters of the second target extraction model based on a loss between the predicted posture of the target object and the designated vehicle-using posture of the second sample image.
10. The method of claim 1, wherein the controlling the unmanned vehicle to pickup the target object according to the location information of the target object comprises:
controlling the unmanned vehicle to travel to a specified range around the position information of the target object;
when it is detected that the unmanned vehicle is located within the specified range around the position information of the target object, determining, through voice interaction, whether the target object intends to take the vehicle;
and if it is determined that the target object intends to take the vehicle, instructing the target object to board the vehicle.
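An illustrative sketch of the pickup flow in claim 10 follows; the `drive_to`, `within_range`, `ask_yes_no`, and `show_boarding_prompt` helpers, as well as the 5 m radius, are hypothetical and not part of the patent.

```python
def pick_up(vehicle, dialogue, target, pickup_radius_m=5.0):
    """Drive near the target and confirm boarding by voice (claim 10 sketch)."""
    # Control the vehicle to travel to a specified range around the target's position.
    vehicle.drive_to(target.position)

    # Once within the specified range, ask by voice whether the target intends to take the vehicle.
    if vehicle.within_range(target.position, pickup_radius_m):
        if dialogue.ask_yes_no("Would you like to take this vehicle?"):
            # Instruct the target object to board the vehicle.
            dialogue.show_boarding_prompt()
            return True
    return False
```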
11. The method of claim 1, wherein interacting with the target object to determine the destination of the target object and the driving route to the destination after the pickup is successful comprises:
receiving and recognizing voice information of the target object, and determining a destination of the target object;
displaying the destination of the target object through a display screen built in the unmanned vehicle, and planning at least one candidate route after receiving a destination confirmation instruction of the target object;
selecting one candidate route as the driving route.
12. The method of claim 11, wherein the selecting one candidate route as the driving route comprises:
in response to a selection indication of the target object for the at least one candidate route, taking the selected candidate route as the driving route.
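Claims 11 and 12 describe the in-vehicle interaction for confirming the destination and choosing a route. The sketch below is purely illustrative and assumes hypothetical `recognize_speech`, `show_destination`, `wait_for_confirmation`, `plan_routes`, and `choose_route` interfaces.

```python
def confirm_destination_and_route(dialogue, display, planner):
    """Determine the destination by speech, confirm it on screen, and pick a route (claims 11-12 sketch)."""
    # Receive and recognize the passenger's voice information to determine the destination.
    destination = dialogue.recognize_speech()

    # Show the destination on the vehicle's built-in display and wait for a confirmation instruction.
    display.show_destination(destination)
    display.wait_for_confirmation()

    # Plan at least one candidate route and let the passenger select one of them.
    candidates = planner.plan_routes(destination)
    chosen = display.choose_route(candidates)  # selection indication from the passenger
    return destination, chosen
```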
13. The method of claim 1, wherein, after the controlling the unmanned vehicle to travel to the destination according to the driving route, the method further comprises:
requesting the target object to pay for the order;
and after it is confirmed that the target object has completed the payment, prompting the target object to rate the service.
14. A control device of an unmanned vehicle, characterized by comprising:
an acquisition module for acquiring an image of an environment surrounding the unmanned vehicle;
an analysis module for analyzing, from the image, a target object having a vehicle-using intention;
a first control module for controlling the unmanned vehicle to pick up the target object according to position information of the target object;
a determining module for interacting, after the pickup is successful, with the target object to determine the destination of the target object and a driving route to the destination;
and a second control module for controlling the unmanned vehicle to travel to the destination according to the driving route.
15. The apparatus of claim 14, wherein the unmanned vehicle comprises a forward-looking camera and a look-around camera, and the acquisition module is configured to:
respectively acquire images collected by the forward-looking camera and the look-around camera.
16. The apparatus of claim 15, wherein the analysis module is configured to:
extract, by using a pre-trained first target extraction model, first candidate target objects having a designated vehicle-using posture from the image acquired by the forward-looking camera to obtain a first candidate target object set; and extract, by using a pre-trained second target extraction model, second candidate target objects having the designated vehicle-using posture from the image acquired by the look-around camera to obtain a second candidate target object set;
screen out candidate target objects that are the same in the first candidate target object set and the second candidate target object set, and filter the same candidate target objects out of the second candidate target object set;
and screen out, based on the first candidate target object set and the filtered second candidate target object set, a candidate target object that satisfies a preset positional relationship with the unmanned vehicle as the target object having the vehicle-using intention.
17. The apparatus of claim 16, wherein the analysis module is further configured to:
determine position information of each first candidate target object in the first candidate target object set, and determine position information of each second candidate target object in the second candidate target object set;
compare the position information of each first candidate target object in the first candidate target object set with the position information of each second candidate target object in the second candidate target object set one by one;
and take a first candidate target object and a second candidate target object whose position difference is within a preset distance range as the same candidate target object.
18. The apparatus of claim 16, wherein the designated vehicle-using posture is facing the unmanned vehicle while making a designated gesture.
19. The apparatus of claim 16, wherein, when the images acquired by the forward-looking camera and the look-around camera are both video frames, each first candidate target object and each second candidate target object appears in at least two video frames.
20. The apparatus according to claim 16, wherein the preset positional relationship is having the shortest distance to the unmanned vehicle.
21. The apparatus of claim 16, further comprising:
a first training module to train the first target extraction model according to the following method:
obtaining a first training sample, wherein the first training sample comprises a first sample image and a designated vehicle-using posture associated with the first sample image;
inputting the first sample image into the first target extraction model to obtain a predicted posture of a target object in the first sample image output by the first target extraction model;
training parameters of the first target extraction model based on a loss between the predicted posture of the target object and the designated vehicle-using posture of the first sample image.
22. The apparatus of claim 16, further comprising:
a second training module for training the second target extraction model according to the following method:
acquiring a second training sample, wherein the second training sample comprises a second sample image and a designated vehicle-using posture associated with the second sample image;
inputting the second sample image into the second target extraction model to obtain a predicted posture of a target object in the second sample image output by the second target extraction model;
training parameters of the second target extraction model based on a loss between the predicted posture of the target object and the designated vehicle-using posture of the second sample image.
23. The apparatus of claim 14, wherein the first control module is configured to:
control the unmanned vehicle to travel to a specified range around the position information of the target object;
when it is detected that the unmanned vehicle is located within the specified range around the position information of the target object, determine, through voice interaction, whether the target object intends to take the vehicle;
and if it is determined that the target object intends to take the vehicle, pick up the target object.
24. The apparatus of claim 14, wherein the determining module is configured to:
receive and recognize voice information of the target object, and determine the destination of the target object;
display the destination of the target object through a display screen built in the unmanned vehicle, and plan at least one candidate route after receiving a destination confirmation instruction of the target object;
and select one candidate route as the driving route.
25. The apparatus of claim 24, wherein the determining module is further configured to:
in response to a selection indication of the target object for the at least one candidate route, take the selected candidate route as the driving route.
26. The apparatus of claim 14, wherein the second control module is further configured to:
request the target object to pay for the order;
and after it is confirmed that the target object has completed the payment, prompt the target object to rate the service.
27. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 13 are implemented when the computer program is executed by the processor.
28. A computer-readable storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the steps of the method of any one of claims 1 to 13.
CN202011410339.0A 2020-12-03 2020-12-03 Method and device for controlling unmanned vehicle, electronic device and storage medium Active CN112477886B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011410339.0A CN112477886B (en) 2020-12-03 2020-12-03 Method and device for controlling unmanned vehicle, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112477886A true CN112477886A (en) 2021-03-12
CN112477886B CN112477886B (en) 2022-03-01

Family

ID=74939561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011410339.0A Active CN112477886B (en) 2020-12-03 2020-12-03 Method and device for controlling unmanned vehicle, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112477886B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109715443A (en) * 2016-09-16 2019-05-03 奥迪股份公司 Method for running motor vehicle
CN109643494A (en) * 2017-04-14 2019-04-16 松下电器(美国)知识产权公司 Automatic driving vehicle, the parking method of automatic driving vehicle and program
US20190212738A1 (en) * 2017-04-14 2019-07-11 Panasonic Intellectual Property Corporation Of America Autonomous driving vehicle, method of stopping autonomous driving vehicle, and recording medium
US20200363825A1 (en) * 2018-02-09 2020-11-19 Denso Corporation Pickup system
CN110320911A (en) * 2019-07-01 2019-10-11 百度在线网络技术(北京)有限公司 Unmanned vehicle control method, device, unmanned vehicle and storage medium
CN111976744A (en) * 2020-08-20 2020-11-24 东软睿驰汽车技术(沈阳)有限公司 Control method and device based on taxi taking and automatic driving automobile

Also Published As

Publication number Publication date
CN112477886B (en) 2022-03-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant