CN109270925B - Human-vehicle interaction method, device, equipment and storage medium

Human-vehicle interaction method, device, equipment and storage medium

Info

Publication number
CN109270925B
Authority
CN
China
Prior art keywords
target person
vehicle
unmanned vehicle
image
subunit
Prior art date
Legal status
Active
Application number
CN201710580735.XA
Other languages
Chinese (zh)
Other versions
CN109270925A (en)
Inventor
姚发亮 (Yao Faliang)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201710580735.XA priority Critical patent/CN109270925B/en
Publication of CN109270925A publication Critical patent/CN109270925A/en
Application granted granted Critical
Publication of CN109270925B publication Critical patent/CN109270925B/en

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0214: ... with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0231: ... using optical position detecting means
    • G05D1/0242: ... using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246: ... using a video camera in combination with image processing means
    • G05D1/0253: ... extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0263: ... using magnetic or electromagnetic means using magnetic strips
    • G05D1/0276: ... using signals provided by a source external to the vehicle
    • G05D1/028: ... using a RF signal
    • G05D1/0285: ... using signals transmitted via a public communication network, e.g. GSM network
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/52: Network services specially adapted for the location of the user terminal
    • H04L9/32: Cryptographic mechanisms or arrangements including means for verifying the identity or authority of a user of the system or for message authentication
    • H04L9/3226: ... using a predetermined code, e.g. password, passphrase or PIN
    • H04L9/3231: ... using biological data, e.g. fingerprint, voice or retina

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Computer Security & Cryptography (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a human-vehicle interaction method, device, equipment and storage medium. The method comprises: when it is determined that a user has a pick-up requirement, generating a pick-up instruction, the pick-up instruction carrying the position information of a target person; and sending the pick-up instruction to an unmanned vehicle so that the unmanned vehicle automatically drives to the position of the target person. By applying the scheme of the invention, the performance of the unmanned vehicle can be improved.

Description

Human-vehicle interaction method, device, equipment and storage medium
[ technical field ]
The invention relates to unmanned vehicle technology, and in particular to a human-vehicle interaction method, device, equipment and storage medium.
[ background of the invention ]
With the development of technology, unmanned vehicles are being applied more and more widely.
An unmanned vehicle, also called an autonomous vehicle, senses the environment around the vehicle through an on-vehicle sensing system, and controls the steering and speed of the vehicle according to the sensed road, vehicle position, obstacle information, and the like, so that the vehicle can safely and reliably travel on the road.
How to communicate with an unmanned vehicle quickly and effectively, that is, how to perform human-vehicle interaction effectively, is an important subject of current research, since it bears directly on the convenience of passengers' lives.
At present, human-vehicle interaction is mainly confined to the inside of the vehicle. For example, a user can set a destination through an interactive interface provided by the system, and the unmanned vehicle then carries the user to that destination in automatic driving mode; the user can also switch between the automatic driving mode and a manual driving mode through the interactive interface.
The existing human-vehicle interaction modes are therefore quite limited, which in turn limits the performance of the unmanned vehicle.
[ summary of the invention ]
In view of this, the invention provides a human-vehicle interaction method, a human-vehicle interaction device, human-vehicle interaction equipment and a storage medium, which can improve the performance of an unmanned vehicle.
The specific technical scheme is as follows:
a human-vehicle interaction method, comprising:
when it is determined that a user has a pick-up requirement, generating a pick-up instruction, wherein the pick-up instruction carries the position information of a target person;
and sending the pick-up instruction to an unmanned vehicle so that the unmanned vehicle can automatically drive to the position of the target person.
According to a preferred embodiment of the present invention, the position of the target person is a position set by the user;
or the position of the target person is the located position of the user.
According to a preferred embodiment of the present invention, the pick-up instruction further carries image information of the target person, which is used by the unmanned vehicle, after driving to the position of the target person, to capture an image and to open a vehicle door when it is determined, according to the image information of the target person, that the target person exists in the captured image.
A human-vehicle interaction method, comprising:
acquiring a pick-up instruction from a user, wherein the pick-up instruction carries the position information of a target person;
and controlling the unmanned vehicle to automatically drive to the position of the target person.
According to a preferred embodiment of the present invention, the controlling the unmanned vehicle to automatically travel to the position of the target person includes:
acquiring the position of the unmanned vehicle;
planning a navigation path according to the position of the unmanned vehicle and the position of the target person;
and controlling the unmanned vehicle to drive to the position of the target person according to the planned navigation path.
According to a preferred embodiment of the present invention, the pick-up command further carries: image information of the target person;
the method further comprises the following steps:
after the unmanned vehicle travels to the position of the target person, capturing an image;
and determining, according to the image information of the target person, whether the target person exists in the captured image, and if so, opening the vehicle door.
According to a preferred embodiment of the invention, the method further comprises:
after the unmanned vehicle travels to the position of the target person, capturing an image;
and determining whether a person contained in pre-stored image information exists in the captured image, and if so, opening the vehicle door.
A human-vehicle interaction device, comprising: an instruction generation unit and a first communication unit;
the instruction generating unit is used for generating a pick-up instruction when it is determined that the user has a pick-up requirement, wherein the pick-up instruction carries the position information of the target person, and sending the pick-up instruction to the first communication unit;
the first communication unit is used for sending the pick-up instruction to the unmanned vehicle so that the unmanned vehicle can automatically drive to the position of the target person.
According to a preferred embodiment of the present invention, the position of the target person is a position set by the user;
or the position of the target person is the located position of the user.
According to a preferred embodiment of the present invention, the pick-up instruction further carries image information of the target person, which is used by the unmanned vehicle, after driving to the position of the target person, to capture an image and to open a vehicle door when it is determined, according to the image information of the target person, that the target person exists in the captured image.
A human-vehicle interaction device, comprising: a second communication unit and a vehicle control unit;
the second communication unit is used for acquiring a pick-up instruction from a user, wherein the pick-up instruction carries the position information of a target person, and sending the pick-up instruction to the vehicle control unit;
and the vehicle control unit is used for controlling the unmanned vehicle to automatically drive to the position of the target person.
According to a preferred embodiment of the present invention, the vehicle control unit includes: a positioning subunit, a navigation subunit and a control subunit;
the control subunit is used for, after the pick-up instruction is obtained, notifying the positioning subunit to perform its function, and sending the position information of the target person to the navigation subunit;
the positioning subunit is used for positioning the position of the unmanned vehicle and sending the position of the unmanned vehicle to the navigation subunit;
the navigation subunit is used for planning a navigation path according to the position of the unmanned vehicle and the position of the target person and sending the navigation path to the control subunit;
the control subunit is further configured to control the unmanned vehicle to travel to the position of the target person according to the navigation path.
According to a preferred embodiment of the present invention, the pick-up command further carries: image information of the target person;
the vehicle control unit further includes: a storage subunit and a camera subunit;
the control subunit is further configured to,
storing the image information of the target person carried in the pick-up instruction into the storage subunit;
and, after the unmanned vehicle travels to the position of the target person, acquiring the image captured by the camera subunit, determining, according to the image information of the target person stored in the storage subunit, whether the target person exists in that image, and if so, opening a vehicle door.
According to a preferred embodiment of the present invention, the vehicle control unit further includes: a storage subunit and a camera subunit;
the control subunit is further configured to,
and, after the unmanned vehicle travels to the position of the target person, acquiring the image captured by the camera subunit, determining whether that image contains a person from the image information pre-stored in the storage subunit, and if so, opening the vehicle door.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as described above when executing the program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method as set forth above.
As described above, with the scheme of the invention, a pick-up instruction carrying the position information of a target person can be generated when it is determined that the user has a pick-up requirement; the instruction can then be sent to the unmanned vehicle, which accordingly drives automatically to the position of the target person.
[ description of the drawings ]
Fig. 1 is a flowchart of a human-vehicle interaction method according to a first embodiment of the present invention.
Fig. 2 is a flowchart of a human-vehicle interaction method according to a second embodiment of the present invention.
Fig. 3 is a flowchart of a human-vehicle interaction method according to a third embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a human-vehicle interaction device according to a first embodiment of the invention.
Fig. 5 is a schematic structural diagram of a human-vehicle interaction device according to a second embodiment of the invention.
FIG. 6 illustrates a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present invention.
[ detailed description ]
In order to make the technical scheme of the invention clearer and easier to understand, the scheme is further explained below with reference to the attached drawings and embodiments.
It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a human-vehicle interaction method according to a first embodiment of the present invention; as shown in fig. 1, the method comprises the following steps.
In 101, when it is determined that the user has a pick-up requirement, a pick-up instruction is generated, and the pick-up instruction carries the position information of the target person.
At 102, a pick-up command is sent to the unmanned vehicle so that the unmanned vehicle automatically travels to the location of the target person.
The user is typically the owner of the unmanned vehicle. When the user has a pick-up requirement, for example needing the unmanned vehicle to pick up himself or herself, or needing it to pick up a friend, the requirement can be expressed in some manner through an intelligent terminal the user carries, such as a mobile phone.
As can be seen, the target person may be the user itself, or may be another person.
Preferably, an App may be pre-installed on the mobile phone; after opening the App, the user may input the requirement in text or voice form, for example "come and pick me up at the north gate of the office building".
Position information can be extracted from the user's input through semantic analysis or the like, taken as the position of the target person, carried in the generated pick-up instruction, and sent to the unmanned vehicle.
Or, more simply, the user's input may contain only the position information, so that semantic analysis and similar processing can be omitted; the position information input by the user is then directly taken as the position of the target person, carried in the generated pick-up instruction, and sent to the unmanned vehicle.
In addition, if the user does not input position information, for example if the user inputs only "pick me up", the position of the user can be located automatically, and the located position is taken as the position of the target person, carried in the generated pick-up instruction, and sent to the unmanned vehicle.
The specific format of the generated pick-up instruction is not limited.
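For concreteness, the following is a minimal sketch of what such an instruction might look like, assuming a JSON encoding over the wireless link; the field names, the base64 image field, and the builder function are illustrative assumptions, not part of the claimed scheme.

    import json
    import time

    def build_pickup_instruction(target_location, target_image_b64=None):
        """Build a hypothetical pick-up instruction payload.

        target_location: (latitude, longitude) of the target person, either
        set by the user or obtained by locating the user's terminal.
        target_image_b64: optional base64-encoded image of the target person.
        """
        instruction = {
            "type": "pickup",
            "timestamp": int(time.time()),
            "target_location": {
                "lat": target_location[0],
                "lng": target_location[1],
            },
        }
        if target_image_b64 is not None:
            # The image lets the vehicle verify the target person
            # before opening the door, as described below.
            instruction["target_image"] = target_image_b64
        return json.dumps(instruction)

    # Example: request a pick-up at given coordinates.
    print(build_pickup_instruction((39.9896, 116.3053)))

Any equivalent encoding (protocol buffers, a proprietary binary format, etc.) would serve equally well, since the format is expressly left open.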
After obtaining the pick-up instruction, the unmanned vehicle can automatically drive to the position of the target person.
As described above, the pick-up instruction carries the position information of the target person; on this basis, it may further carry image information of the target person. For example, the App may display an "upload image" button, and the user may locate an image of the target person stored on the mobile phone and upload it.
Therefore, after the unmanned vehicle travels to the position of the target person, an image can be captured, and whether the target person exists in the captured image can be determined according to the image information of the target person; if so, the vehicle door can be opened.
In this way, the legitimacy of the person getting on the vehicle is verified, which improves the safety of the unmanned vehicle.
Fig. 2 is a flowchart of a human-vehicle interaction method according to a second embodiment of the present invention; as shown in fig. 2, the method comprises the following steps.
In 201, a pick-up instruction from a user is obtained, where the pick-up instruction carries information of a location where a target person is located.
In 202, the unmanned vehicle is controlled to automatically travel to the location of the target person.
After a pick-up instruction from a user is obtained, the position of the unmanned vehicle can first be obtained, for example by locating the vehicle with an on-board positioning module.
Then, a navigation path is planned according to the position of the unmanned vehicle and the position of the target person; that is, using existing techniques, an optimal navigation path from the position of the unmanned vehicle to the position of the target person is planned.
The optimal navigation path may be a navigation path having the shortest travel distance, or may be a navigation path having the shortest travel time.
Then, the unmanned vehicle is controlled to travel to the position of the target person according to the planned navigation path; that is, it is controlled to drive automatically along the planned optimal navigation path.
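As an illustration of this planning step, the sketch below treats the road network as a weighted graph and runs a shortest-path search with the networkx library; the graph construction and node names are assumptions, and a production planner would of course operate on real map data.

    import networkx as nx

    def plan_route(road_graph, vehicle_node, target_node, criterion="distance"):
        """Plan a navigation path on a road network.

        road_graph: nx.Graph whose edges carry 'distance' (meters) and
        'travel_time' (seconds) attributes; minimizing one or the other
        corresponds to the shortest-distance vs. shortest-time paths above.
        """
        weight = "distance" if criterion == "distance" else "travel_time"
        return nx.shortest_path(road_graph, vehicle_node, target_node, weight=weight)

    # Toy usage: a three-node road network.
    g = nx.Graph()
    g.add_edge("garage", "gate_a", distance=400, travel_time=60)
    g.add_edge("gate_a", "north_gate", distance=300, travel_time=45)
    g.add_edge("garage", "north_gate", distance=900, travel_time=50)

    print(plan_route(g, "garage", "north_gate"))                 # shortest distance
    print(plan_route(g, "garage", "north_gate", "travel_time"))  # shortest time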
As described above, the pick-up instruction may further carry image information of the target person. In that case, after the pick-up instruction is received, the image information can be stored; later, after the unmanned vehicle travels to the position of the target person, an image can be captured, and whether the target person exists in the captured image can be determined according to the stored image information, the vehicle door being opened if so.
Specifically, a sensor (camera) mounted on the vehicle body can be used to capture an image, and face detection can then be performed on the captured image. Face detection algorithms fall into statistics-based algorithms and structural-feature-based algorithms: the former include, for example, face detection based on histogram rough segmentation and singular value features, and face detection based on binary wavelet transform; the latter include, for example, face detection based on the AdaBoost algorithm, and face detection based on binocular facial structure features.
For a captured image, face detection may be performed with a single face detection algorithm; to improve the detection effect, a combination of multiple face detection algorithms may also be used.
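The patent does not fix which algorithms are combined; as one hedged example, the sketch below unions the results of two classical OpenCV Haar-cascade detectors (frontal and profile faces) and merges overlapping boxes, standing in for any multi-algorithm combination.

    import cv2

    # Two classical detectors used as stand-ins for the combination of face
    # detection algorithms described above (the specific pairing is an assumption).
    frontal = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    profile = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_profileface.xml")

    def detect_faces(image_bgr):
        """Run both detectors and keep the union of their detections,
        discarding boxes that heavily overlap an already-kept box."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        boxes = list(frontal.detectMultiScale(gray, 1.1, 5)) + \
                list(profile.detectMultiScale(gray, 1.1, 5))
        merged = []
        for (x, y, w, h) in boxes:
            if not any(_iou((x, y, w, h), m) > 0.4 for m in merged):
                merged.append((x, y, w, h))
        return merged

    def _iou(a, b):
        """Intersection-over-union of two (x, y, w, h) boxes."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        return inter / float(aw * ah + bw * bh - inter)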
For example, if the target person is at the north gate of an office building, there may be many people at that location at the end of the working day, so the captured image may contain multiple faces.
If one face is detected, it can be compared with the face in the image information of the target person carried in the pick-up instruction, and if the two belong to the same person, the vehicle door can be opened to let the target person get in.
If multiple faces are detected, each can be compared with the face in the image information of the target person carried in the pick-up instruction; if any detected face belongs to the same person as that face, the vehicle door can be opened, or the vehicle can first be driven to just in front of the target person and the door then opened.
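As a sketch of this comparison step, the following uses the open-source face_recognition library to match every face found in the captured image against the reference image carried in the pick-up instruction; the library choice, file names, and tolerance value are assumptions, not the patent's prescribed method.

    import face_recognition

    def verify_target(captured_image_path, target_image_path, tolerance=0.6):
        """Return True if any face in the captured image matches the face
        in the target person's reference image."""
        target = face_recognition.load_image_file(target_image_path)
        target_encs = face_recognition.face_encodings(target)
        if not target_encs:
            return False  # reference image contains no detectable face
        captured = face_recognition.load_image_file(captured_image_path)
        for enc in face_recognition.face_encodings(captured):
            # compare_faces returns one boolean per known encoding
            if face_recognition.compare_faces([target_encs[0]], enc, tolerance)[0]:
                return True
        return False

    if verify_target("door_camera.jpg", "target_person.jpg"):
        print("match: open the door")  # stand-in for the door-opening actuator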
In this way, the legitimacy of the person getting on the vehicle is verified, thereby improving the safety of the unmanned vehicle.
In addition, to simplify operation, the user need not carry image information of the target person in every pick-up instruction; instead, some commonly used image information may be stored in the unmanned vehicle in advance, such as images of the user, of family members, and of close friends.
In this way, after the unmanned vehicle travels to the position of the target person and captures an image, it can be determined whether the captured image contains a person from the pre-stored image information, and if so, the vehicle door can be opened.
Combining the above, fig. 3 is a flowchart of a human-vehicle interaction method according to a third embodiment of the present invention; as shown in fig. 3, the method comprises the following steps.
In 301, when it is determined that the user has a pick-up requirement, a pick-up instruction is generated, where the pick-up instruction carries the position information of the target person and the image information of the target person.
In this embodiment, it is assumed that the pick-up instruction carries both the position information of the target person and the image information of the target person.
At 302, a pick-up command is sent to the unmanned vehicle for processing in the manner shown at 303-309.
The pick-up command may be sent to the unmanned vehicle via wireless communication.
In 303, the location of the unmanned vehicle is obtained.
At 304, a navigation path is planned based on the location of the unmanned vehicle and the location of the target person.
In 305, the unmanned vehicle is controlled to travel to the position of the target person according to the planned navigation path.
At 306, an image is captured at the position of the target person.
In 307, it is determined whether a target person exists in the captured image based on the image information of the target person, and if so, 308 is performed, otherwise 309 is performed.
At 308, the door is opened, ending the process.
After the vehicle door is opened, the target person can get into the vehicle and then travel to the destination in automatic driving mode or manual driving mode.
At 309, the door is not opened, and the process ends.
If the target person does not exist in the captured image, the vehicle door is not opened, and how to proceed can be decided according to actual needs.
For example, the vehicle may continue waiting; after a predetermined period of time, an image can be captured again and it can be determined whether the target person exists in it, the vehicle door being opened if so.
For instance, the unmanned vehicle may arrive at the position of the target person earlier than the target person expected, and the target person may have temporarily gone elsewhere nearby; in that case the target person can still be picked up on returning within the predetermined waiting period.
For another example, the position of the target person may be acquired again, and navigation and driving repeated to the newly acquired position.
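Putting this arrival-time behavior together, a hypothetical control loop might look like the sketch below, reusing verify_target from the face-comparison example above; the vehicle interface (drive_to, capture_image, open_door), the request_fresh_location helper, and the wait duration are all illustrative assumptions.

    import time

    WAIT_SECONDS = 300  # the "predetermined period of time"; the value is an assumption

    def request_fresh_location():
        """Stand-in for asking the user's terminal for an updated position."""
        return None

    def arrival_loop(vehicle, target_location, target_image, max_retries=2):
        """Hypothetical arrival procedure: verify the person at the door,
        otherwise wait and retry, otherwise drive to a re-acquired position.

        `vehicle` is assumed to expose drive_to(), capture_image() and
        open_door(); verify_target() is the face-comparison sketch above.
        """
        vehicle.drive_to(target_location)
        for attempt in range(max_retries + 1):
            if verify_target(vehicle.capture_image(), target_image):
                vehicle.open_door()
                return True
            if attempt < max_retries:
                time.sleep(WAIT_SECONDS)  # continue waiting, then capture again
        # Still no match: re-acquire the target person's position and drive there.
        new_location = request_fresh_location()
        if new_location is not None:
            vehicle.drive_to(new_location)
        return False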
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In short, with the schemes of the above method embodiments, a user who is not inside the unmanned vehicle can still interact with it and request it to drive automatically to a specified position for a pick-up. This improves the performance of the unmanned vehicle and makes it more convenient to use, while verifying the legitimacy of the person getting on improves its safety.
The applicable scenarios of the above embodiments of the method can be exemplified as follows:
1) Scene one
While the user is at work, the unmanned vehicle is parked in the underground parking lot of the office building; when the user gets off work, a pick-up instruction can be sent to the unmanned vehicle requesting a pick-up at the north gate of the building, so that the user need not go down to the underground garage and search for the vehicle.
2) Scene two
When the user goes shopping, a pick-up instruction can be sent to the unmanned vehicle requesting a pick-up at a designated place, sparing the user the trouble of finding a parking space.
3) Scene three
While the user is at work and the unmanned vehicle is parked in the underground parking lot of the office building, family member A of the user is at position B; when A needs the vehicle, the user can send a pick-up instruction requesting the unmanned vehicle to travel to position B to pick up A.
The above is a description of method embodiments, and the embodiments of the present invention are further described below by way of apparatus embodiments.
Fig. 4 is a schematic structural diagram of a first embodiment of the human-vehicle interaction device of the present invention, as shown in fig. 4, including: an instruction generation unit 401 and a first communication unit 402.
The instruction generating unit 401 is configured to generate a pick-up instruction when it is determined that the user has a pick-up requirement, where the pick-up instruction carries the position information of the target person, and to send the pick-up instruction to the first communication unit 402.
A first communication unit 402, configured to send a pick-up instruction to the unmanned vehicle, so that the unmanned vehicle automatically travels to the position of the target person.
The position of the target person can be set or input by the user, or located automatically. For example, if the user does not set the position of the target person, the position of the user can be located and taken as the position of the target person.
The target person may be the user himself or herself, or another person; that is, the user may request the unmanned vehicle to pick up either himself or herself or someone else.
After the instruction generating unit 401 generates a pick-up instruction, it can be sent to the unmanned vehicle through the first communication unit 402, so that the unmanned vehicle can automatically travel to the position of the target person.
Besides the position information of the target person, the pick-up instruction can further carry the image information of the target person.
In this way, the unmanned vehicle can capture an image after driving to the position of the target person and determine, according to the image information of the target person, whether the target person exists in the captured image; if so, the vehicle door can be opened.
Fig. 5 is a schematic structural diagram of a human-vehicle interaction device according to a second embodiment of the present invention, as shown in fig. 5, including: a second communication unit 501 and a vehicle control unit 502.
The second communication unit 501 is configured to obtain a pick-up instruction from a user, where the pick-up instruction carries the position information of the target person, and to send the pick-up instruction to the vehicle control unit 502.
A vehicle control unit 502 for controlling the unmanned vehicle to automatically travel to the position of the target person.
The second communication unit 501 is used for communicating with the first communication unit 402 in the apparatus shown in fig. 4, for example, a wireless communication method can be adopted.
As shown in fig. 5, the vehicle control unit 502 may specifically include: a positioning subunit 5021, a navigation subunit 5022, and a control subunit 5023.
After the pick-up instruction is obtained, the control subunit 5023 notifies the positioning subunit 5021 to perform its function and sends the position information of the target person to the navigation subunit 5022.
And the positioning subunit 5021 is used for acquiring the position of the unmanned vehicle and sending the position of the unmanned vehicle to the navigation subunit 5022.
The navigation subunit 5022 is configured to plan a navigation path according to the position of the unmanned vehicle and the position of the target person, and send the navigation path to the control subunit 5023.
The control subunit 5023 is further used for controlling the unmanned vehicle to travel to the position of the target person according to the navigation path.
That is, after the pick-up instruction from the user is obtained, the positioning subunit 5021 first acquires the position of the unmanned vehicle; the navigation subunit 5022 then plans a navigation path according to the position of the unmanned vehicle and the position of the target person, where the planned path may be the navigation path with the shortest travel distance or the one with the shortest travel time; finally, the control subunit 5023 controls the unmanned vehicle to travel to the position of the target person along the planned path.
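The following sketch mirrors this subunit decomposition in code; the class and method names are illustrative assumptions chosen to track fig. 5, not an actual implementation.

    class VehicleControlUnit:
        """Illustrative decomposition mirroring fig. 5; the subunit
        interfaces are assumptions, not the patent's actual code."""

        def __init__(self, positioning, navigation, controller):
            self.positioning = positioning  # locates the vehicle
            self.navigation = navigation    # plans a path
            self.controller = controller    # drives along the path

        def handle_pickup(self, instruction):
            target = instruction["target_location"]
            vehicle_pos = self.positioning.locate()           # positioning subunit 5021
            path = self.navigation.plan(vehicle_pos, target)  # navigation subunit 5022
            self.controller.follow(path)                      # control subunit 5023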
In practical application, the pick-up instruction may further carry image information of the target person.
Accordingly, as shown in fig. 5, the vehicle control unit 502 may further include: a storage subunit 5024 and a camera subunit 5025.
The control subunit 5023 may store the image information of the target person carried in the pick-up instruction in the storage subunit 5024. After the unmanned vehicle has traveled to the position of the target person, it acquires the image captured by the camera subunit 5025 and determines, according to the image information of the target person stored in the storage subunit 5024, whether the target person exists in that image; if so, it opens the vehicle door.
For example, face detection may be performed on the image captured by the camera subunit 5025, and one face or multiple faces may be detected; if the target person is at the north gate of an office building, for instance, there may be many people at that location at the end of the working day, so the captured image may contain multiple faces.
If one face is detected, it can be compared with the face in the image information of the target person stored in the storage subunit 5024, and if the two belong to the same person, the vehicle door can be opened to let the target person get in.
If multiple faces are detected, each can be compared with the face in the image information of the target person stored in the storage subunit 5024, and if any detected face belongs to the same person as that face, the vehicle door can be opened.
In addition, to simplify operation, the user need not carry image information of the target person in every pick-up instruction; instead, some commonly used image information may be stored in the storage subunit 5024 in advance, such as images of the user, of family members, and of close friends.
In this way, after the unmanned vehicle travels to the position of the target person, the image captured by the camera subunit 5025 can be acquired, and it can be determined whether that image contains a person from the image information pre-stored in the storage subunit 5024; if so, the vehicle door can be opened.
For the specific workflow of the device embodiments shown in fig. 4 and fig. 5, please refer to the corresponding descriptions in the foregoing method embodiments, which are not repeated here.
In addition, in practical applications, the device shown in fig. 4 can be deployed on the user side and the device shown in fig. 5 on the unmanned vehicle side, the two devices working in cooperation.
With the schemes of the above device embodiments, a user who is not inside the unmanned vehicle can still interact with it and request it to drive automatically to a designated position for a pick-up. This improves the performance of the unmanned vehicle and makes it more convenient to use, while verifying the legitimacy of the person getting on improves its safety.
FIG. 6 illustrates a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present invention. The computer system/server 12 shown in FIG. 6 is only an example and should not be taken to limit the scope of use or the functionality of embodiments of the present invention in any way.
As shown in FIG. 6, computer system/server 12 is in the form of a general purpose computing device. The components of computer system/server 12 may include, but are not limited to: one or more processors (processing units) 16, a memory 28, and a bus 18 that connects the various system components, including the memory 28 and the processors 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer system/server 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may comprise a program product having a set (at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, each of which, or some combination of which, may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments of the invention as described.
The computer system/server 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any devices (e.g., network card, modem, etc.) that enable the computer system/server 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the computer system/server 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 20. As shown in FIG. 6, network adapter 20 communicates with the other modules of computer system/server 12 via bus 18. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computer system/server 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 16 executes various functional applications and data processing by running the program stored in the memory 28, for example implementing the method of the embodiment shown in fig. 1, 2 or 3: when it is determined that the user has a pick-up requirement, a pick-up instruction carrying the position information of the target person is generated and sent to the unmanned vehicle, so that the unmanned vehicle automatically travels to the position of the target person, and so on.
For specific implementation, please refer to the related descriptions in the foregoing embodiments, and further description is omitted.
The invention also discloses a computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, will carry out the method as in the embodiments shown in fig. 1, 2 or 3.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method, etc., can be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (12)

1. A human-vehicle interaction method is characterized by comprising the following steps:
when determining that a user has a pick-up requirement, a human-vehicle interaction device generates a pick-up instruction, wherein the pick-up instruction carries position information of a target person and image information of the target person, and the user is the owner of an unmanned vehicle;
sending the pick-up instruction to the unmanned vehicle so that the unmanned vehicle performs a predetermined process, the predetermined process comprising: automatically driving to the position of the target person, capturing an image, performing face detection on the captured image by combining at least two face detection algorithms, comparing each detected face with the image information of the target person, and, in response to any detected face being the same person as the target person, driving to in front of the target person and opening a vehicle door; otherwise, waiting for a predetermined period and capturing an image again, and opening the vehicle door if the target person is determined to exist in the re-captured image, or re-acquiring the position of the target person and driving to the re-acquired position.
2. The method of claim 1,
the position of the target person is a position set by the user;
or the position of the target person is the located position of the user.
3. A human-vehicle interaction method is characterized by comprising the following steps:
a human-vehicle interaction device obtains a pick-up instruction from a user, wherein the pick-up instruction carries position information of a target person and image information of the target person, and the user is the owner of an unmanned vehicle;
controlling the unmanned vehicle to automatically drive to the position of the target person;
the method further comprises the following steps: when the unmanned vehicle travels to the position of the target person, capturing an image, performing face detection on the captured image by combining at least two face detection algorithms, comparing each detected face with the image information of the target person, and, in response to any detected face being the same person as the target person, driving to in front of the target person and opening a vehicle door; otherwise, waiting for a predetermined period and capturing an image again, and opening the vehicle door if the target person is determined to exist in the captured image, or re-acquiring the position of the target person and driving to the re-acquired position.
4. The method of claim 3,
the controlling the unmanned vehicle to automatically travel to the location of the target person comprises:
acquiring the position of the unmanned vehicle;
planning a navigation path according to the position of the unmanned vehicle and the position of the target person;
and controlling the unmanned vehicle to drive to the position of the target person according to the planned navigation path.
5. A human-vehicle interaction device, comprising: an instruction generation unit and a first communication unit;
the instruction generating unit is used for generating a pick-up instruction when it is determined that the user has a pick-up requirement, wherein the pick-up instruction carries the position information of a target person and the image information of the target person, and sending the pick-up instruction to the first communication unit, the user being the owner of the unmanned vehicle;
the first communication unit is configured to send the pick-up instruction to the unmanned vehicle so that the unmanned vehicle executes a predetermined process, the predetermined process comprising: automatically driving to the position of the target person, capturing an image, performing face detection on the captured image by combining at least two face detection algorithms, comparing each detected face with the image information of the target person, and, in response to any detected face being the same person as the target person, driving to in front of the target person and opening a vehicle door; otherwise, waiting for a predetermined period and capturing an image again, and opening the vehicle door if the target person is determined to exist in the re-captured image, or re-acquiring the position of the target person and driving to the re-acquired position.
6. The apparatus of claim 5,
the position of the target person is a position set by the user;
or the position of the target person is the positioned position of the user.
7. A human-vehicle interaction device, comprising: a second communication unit and a vehicle control unit;
the second communication unit is used for acquiring a pick-up instruction from a user, wherein the pick-up instruction carries the position information of a target person and the image information of the target person, and sending the pick-up instruction to the vehicle control unit, the user being the owner of the unmanned vehicle;
the vehicle control unit is used for controlling the unmanned vehicle to automatically drive to the position of the target person, capturing an image, performing face detection on the captured image by combining at least two face detection algorithms, comparing each detected face with the image information of the target person, and, in response to any detected face being the same person as the target person, driving to in front of the target person and opening a vehicle door; otherwise, waiting for a predetermined period and capturing an image again, and opening the vehicle door if the target person exists in the captured image, or re-acquiring the position of the target person and driving to the re-acquired position.
8. The apparatus of claim 7,
the vehicle control unit includes: a positioning subunit, a navigation subunit, a control subunit, a storage subunit and an image capturing subunit;
the control subunit is configured to, after the pick-up instruction is obtained, notify the positioning subunit to perform its own function and send the position information of the target person to the navigation subunit;
the positioning subunit is configured to locate the position of the unmanned vehicle and send the position of the unmanned vehicle to the navigation subunit;
the navigation subunit is configured to plan a navigation path according to the position of the unmanned vehicle and the position of the target person, and send the navigation path to the control subunit;
the control subunit is further configured to control the unmanned vehicle to travel to the position of the target person according to the navigation path;
the control subunit is further configured to, after the unmanned vehicle travels to the position of the target person, acquire an image captured by the image capturing subunit and open the vehicle door if the captured image is determined to contain the person in the image information pre-stored in the storage subunit; otherwise, wait for a predetermined period of time, acquire an image captured by the image capturing subunit again, and open the vehicle door if the newly captured image is determined to contain that person.
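An illustrative wiring of these subunits in Python; the class and method names are invented for the sketch and only mirror the information flow the claim recites (control notifies positioning, positioning feeds navigation, navigation returns a path to control).

class ControlSubunit:
    """Hypothetical control subunit coordinating the other subunits."""

    def __init__(self, positioning, navigation, storage, camera):
        self.positioning = positioning    # locates the unmanned vehicle
        self.navigation = navigation      # plans the navigation path
        self.storage = storage            # holds the target person's image
        self.camera = camera              # captures images at the destination

    def on_pickup_instruction(self, target_position):
        self.navigation.set_target(target_position)    # forward the target
        vehicle_position = self.positioning.locate()    # request a position fix
        route = self.navigation.plan(vehicle_position)  # navigation replies with a path
        self.follow(route)                              # drive along the path

    def follow(self, route):
        ...  # vehicle actuation is outside the scope of this sketch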
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any one of claims 1-2 when executing the program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, carries out the method of any one of claims 1-2.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any one of claims 3-4 when executing the program.
12. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, carries out the method of any one of claims 3-4.
CN201710580735.XA 2017-07-17 2017-07-17 Human-vehicle interaction method, device, equipment and storage medium Active CN109270925B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710580735.XA CN109270925B (en) 2017-07-17 2017-07-17 Human-vehicle interaction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710580735.XA CN109270925B (en) 2017-07-17 2017-07-17 Human-vehicle interaction method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109270925A CN109270925A (en) 2019-01-25
CN109270925B true CN109270925B (en) 2023-04-07

Family

ID=65147674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710580735.XA Active CN109270925B (en) 2017-07-17 2017-07-17 Human-vehicle interaction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109270925B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857151A (en) * 2020-07-31 2020-10-30 上汽通用汽车有限公司 Vehicle automatic pick-up method, storage medium and electronic device
CN111976744A (en) * 2020-08-20 2020-11-24 东软睿驰汽车技术(沈阳)有限公司 Control method and device based on taxi taking and automatic driving automobile

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004304731A (en) * 2003-04-01 2004-10-28 Nikon Gijutsu Kobo:Kk Camera system and on-vehicle computer system
CN1910633A (en) * 2003-08-15 2007-02-07 程滋颐 Vehicle safety defence warning system with face identification and wireless communication function
CN102662474A (en) * 2012-04-17 2012-09-12 华为终端有限公司 Terminal and method and device for controlling terminal
CN203503163U (en) * 2013-07-08 2014-03-26 重庆市城投金卡信息产业股份有限公司 Vehicle information detection system based on radio frequency identification technology
CN105466449A (en) * 2015-11-20 2016-04-06 沈阳美行科技有限公司 Playing method of vehicle environment information and device thereof
CN106446200A (en) * 2016-09-29 2017-02-22 北京百度网讯科技有限公司 Positioning method and device
CN106686013A (en) * 2017-03-10 2017-05-17 湖北天专科技有限公司 Identity recognition device for unmanned aerial vehicle, recognition system and recognition method thereof

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102497477B (en) * 2011-12-13 2014-08-27 芜湖罗比汽车照明系统有限公司 Cell phone system for vehicle monitoring
CN104123834A (en) * 2014-06-27 2014-10-29 北京艾亿沃德电子有限公司 Taxi calling management settlement system and method
JP6668814B2 (en) * 2015-03-23 2020-03-18 株式会社デンソー Automatic traveling control device and automatic traveling control system
CN105046942A (en) * 2015-06-05 2015-11-11 卢泰霖 Internet-based unmanned electric automobile service system
CN105069736A (en) * 2015-08-25 2015-11-18 北京丰华联合科技有限公司 Car rental management system aiming at automatic drive
CN105243864A (en) * 2015-10-30 2016-01-13 桂林市腾瑞电子科技有限公司 Intelligent control system of unmanned vehicle
CN105346483B (en) * 2015-11-04 2018-07-17 常州加美科技有限公司 A kind of man-machine interactive system of automatic driving vehicle
CN205573940U (en) * 2016-04-20 2016-09-14 哈尔滨理工大学 Vehicle control system and vehicle
CN106131171A (en) * 2016-06-30 2016-11-16 深圳益强信息科技有限公司 A kind of communication system
CN106128091A (en) * 2016-07-14 2016-11-16 陈智 Unmanned taxi system and carrying method

Also Published As

Publication number Publication date
CN109270925A (en) 2019-01-25

Similar Documents

Publication Publication Date Title
EP3627180B1 (en) Sensor calibration method and device, computer device, medium, and vehicle
CN107336243B (en) Robot control system and control method based on intelligent mobile terminal
US11215996B2 (en) Method and device for controlling vehicle, device, and storage medium
US20180237137A1 (en) Voice Activated Unmanned Aerial Vehicle (UAV) Assistance System
CN106959690B (en) Method, device and equipment for searching unmanned vehicle and storage medium
CN113479192B (en) Vehicle parking-out method, vehicle parking-in method, device, equipment and storage medium
CN112581750B (en) Vehicle running control method and device, readable storage medium and electronic equipment
CN109270925B (en) Human-vehicle interaction method, device, equipment and storage medium
CN112837454A (en) Passage detection method and device, electronic equipment and storage medium
CN112650300A (en) Unmanned aerial vehicle obstacle avoidance method and device
US20240048653A1 (en) System and method for providing a notification that a mobile device is still in an autonomous vehicle after detecting an arrival at a destination
CN112085445A (en) Robot destination arrival determining method and device, electronic equipment and storage medium
CN114228702B (en) Method and device for parking passengers, storage medium and vehicle
CN114333404A (en) Vehicle searching method and device for parking lot, vehicle and storage medium
CN114734992A (en) Method, device, apparatus, storage medium and program product for autonomous parking
CN113705390A (en) Positioning method, positioning device, electronic equipment and storage medium
CN112116826A (en) Method and device for generating information
CN109543638A (en) A kind of face identification method, device, equipment and storage medium
CN114758521A (en) Parking lot departure guiding method and device, electronic equipment and storage medium
CN109572687A (en) It parks control method, device, electronic equipment and storage medium
CN113815605A (en) Control method, device, medium and electronic equipment for vehicle parking
CN104423570A (en) Misrecognition reducing motion recognition apparatus and method
CN107150691B (en) Stunt performance method, device and equipment for unmanned vehicle and storage medium
CN108732925B (en) Intelligent device and advancing control method and device thereof
JP6844385B2 (en) Matching system, management device, matching method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant