CN114013431B - Automatic parking control method and system based on user intention - Google Patents

Automatic parking control method and system based on user intention

Info

Publication number
CN114013431B
Authority
CN
China
Prior art keywords
parking
intention
parking space
user
detection result
Prior art date
Legal status
Active
Application number
CN202210007508.9A
Other languages
Chinese (zh)
Other versions
CN114013431A
Inventor
柯乐思
胡爽
Current Assignee
Ningbo Joynext Technology Corp
Original Assignee
Ningbo Joynext Technology Corp
Priority date
Filing date
Publication date
Application filed by Ningbo Joynext Technology Corp filed Critical Ningbo Joynext Technology Corp
Priority to CN202210007508.9A priority Critical patent/CN114013431B/en
Publication of CN114013431A publication Critical patent/CN114013431A/en
Application granted granted Critical
Publication of CN114013431B publication Critical patent/CN114013431B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B60W30/06: Automatic manoeuvring for parking
    • B60W40/02: Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W40/08: Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
    • B60W50/08: Interaction between the driver and the control system
    • B60W2040/089: Driver voice
    • B60W2540/21: Voice (input parameters relating to occupants)
    • B60W2540/223: Posture, e.g. hand, foot, or seat position, turned or inclined (input parameters relating to occupants)
    • B60W2552/50: Barriers (input parameters relating to infrastructure)

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an automatic parking control method based on user intention, comprising: detecting the user's voice and generating a first intention detection result; detecting the user's body posture to generate an intended parking direction; detecting the parking space environment to generate a parking space detection result; generating a second intention detection result according to the intended parking direction and the parking space detection result; and judging, according to the first and second intention detection results, whether to trigger the automatic parking auxiliary system to park the vehicle into the intended parking space. The method offers a good user experience: the user does not need to select a parking space on the vehicle-mounted screen, but only needs to say a keyword expressing a parking intention, such as "park", while pointing in the direction of the desired space, and the vehicle is parked into the selected space. By combining the spoken parking intention with body posture detection, false triggering of the automatic parking control system can be avoided and robustness is improved.

Description

Automatic parking control method and system based on user intention
Technical Field
The invention relates to the field of automatic driving, in particular to an automatic parking control method and system based on user intention.
Background
With the rapid development of automotive electronics, users' expectations for the driving experience are steadily rising, and intelligence has become a core competitive feature of automobiles. Automatic parking, one of these intelligent technologies, greatly improves the user's parking experience; however, the problem of how a parking space is selected and confirmed still needs improvement.
In the prior art, there are several approaches to selecting and confirming a parking space. Some solutions let the automatic parking assist system choose autonomously: when the system detects multiple suitable parking spaces, it calculates a parking trajectory for each space according to its detected geometric dimensions and automatically selects the most suitable one through the control device, or it selects a detected free space in the direction indicated by the current vehicle steering state. These solutions, however, lack a decision-making step for the user in the vehicle and cannot meet the user's personalized parking requirements. Other solutions let the user select the space: multiple suitable parking spaces detected from a panoramic image are shown on the vehicle-mounted display and the driver is prompted to choose one, or an electronic map interface displays a larger range of candidate spaces on the vehicle-mounted screen. These solutions require the user to tap the vehicle-mounted display to confirm a space, and the spaces shown are not necessarily all within the user's current field of view: some may have been detected earlier in the drive and are no longer visible, so the space finally selected by the user may already be occupied by another vehicle.
Still other solutions determine the parking space by obtaining body posture information from a person directing the vehicle from outside, and therefore cannot meet the personalized parking requirements of the user inside the vehicle.
Therefore, there is a need for an automatic parking control method that can improve human-computer interaction experience and meet the individual parking requirement of the user to solve the above technical problems in the prior art.
Disclosure of Invention
In order to solve the above-mentioned problems, it is a primary objective of the present invention to provide a method and a system for controlling automatic parking based on user intention.
In order to achieve the above object, a first aspect of the present invention provides an automatic parking control method based on a user intention, the method including:
detecting the voice of the user to generate a first intention detection result;
detecting the human body posture of the user to generate an intended parking direction;
detecting a parking space environment to generate a parking space detection result, wherein the parking space detection result comprises positions of one or more candidate parking spaces and states of the one or more candidate parking spaces;
generating a second intention detection result according to the intended parking direction and the parking space detection result, wherein the second intention detection result comprises the position of an intended parking space and the state of the intended parking space, and the intended parking space is contained in the one or more candidate parking spaces;
and judging whether to trigger an automatic parking auxiliary system according to the first intention detection result and the second intention detection result so as to park the vehicle into the intention parking space.
In some embodiments, the detecting the user speech and generating the first intention detection result includes:
collecting the user's voice and converting it into corresponding text information;
detecting whether the text information contains preset keywords, so as to generate the first intention detection result;
if the text information contains a preset keyword, the first intention detection result indicates that the user has the parking intention;
and if the text information does not contain a preset keyword, the first intention detection result indicates that the user does not have the parking intention.
In some embodiments, the detecting the human body posture of the user and generating the intended parking direction includes:
detecting the human body posture of the user based on a depth camera, and acquiring key nodes and the three-dimensional coordinates corresponding to the key nodes;
selecting the key nodes and judging, according to the three-dimensional coordinates corresponding to the key nodes, whether to generate a three-dimensional space straight line;
and after the three-dimensional space straight line is generated, determining the intended parking direction according to the three-dimensional space straight line.
In some embodiments, the key nodes include user arm key nodes and user hand key nodes; the selecting the key nodes and judging whether to generate a three-dimensional space straight line according to the three-dimensional coordinates corresponding to the key nodes comprises:
selecting the user arm key nodes and judging the state of the user's arm according to the three-dimensional coordinates corresponding to the user arm key nodes;
if the user arm state is a lifted state, selecting the user hand key nodes and judging the user finger state according to the three-dimensional coordinates corresponding to the user hand key nodes;
and if the user finger state has a pointing direction, selecting the key nodes and generating the three-dimensional space straight line according to the three-dimensional coordinates corresponding to the key nodes and a preset processing rule.
In some embodiments, the detecting the parking space environment and generating a parking space detection result, where the parking space detection result includes positions of one or more candidate parking spaces and states of the one or more candidate parking spaces, includes:
detecting the parking space environment based on a depth camera, and generating a parking space two-dimensional image and parking space depth information corresponding to the parking space environment;
according to the parking space two-dimensional image and a preset model, identifying the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces in the parking space environment; and/or
And identifying the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces in the parking space environment according to the calibration relation between the two-dimensional parking space image and the depth information of the parking space.
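The calibration relation between the two-dimensional image and the depth information can be illustrated with the standard pinhole back-projection, which recovers a 3-D point in the camera frame from a pixel and its measured depth. This is the common camera model, not a formula taken from the patent; the helper names and the corner-averaging rule for locating a slot are illustrative assumptions.

```python
import numpy as np

def pixel_to_camera_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into camera
    coordinates using the pinhole model (fx, fy: focal lengths in
    pixels; cx, cy: principal point)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def slot_position_3d(corners_uv, depth_map, intrinsics):
    """Approximate a candidate parking space's 3-D position as the mean
    of its back-projected corner pixels detected in the 2-D image."""
    fx, fy, cx, cy = intrinsics
    corners = [pixel_to_camera_3d(u, v, float(depth_map[v, u]), fx, fy, cx, cy)
               for u, v in corners_uv]
    return np.mean(corners, axis=0)
```

The same back-projection also applies to the body key nodes described later, which is one reason a single depth camera can serve both detection tasks.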
In some embodiments, before the detecting the human body posture of the user and generating the intended parking direction, the method further comprises:
simultaneously acquiring the human body posture of the user and the parking space environment through the same depth camera, wherein the depth camera is mounted diagonally behind the driver's seat and the front passenger seat of the vehicle.
In some embodiments, the generating a second intention detection result according to the intention parking direction and the parking space detection result, the second intention detection result including a position of an intention parking space and a state of the intention parking space, includes:
and matching the intended parking direction with the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces, and determining the positions of the intended parking spaces and the states of the intended parking spaces to generate the second intention detection result.
In some embodiments, the determining whether to trigger an automatic parking assist system to park the vehicle into the intended parking space according to the first intention detection result and the second intention detection result includes:
if the first intention detection result indicates that the user has the parking intention, further judging the state of the intended parking space;
if the intended parking space is available for parking, generating a first prompt voice to announce that the intended parking space is available, and triggering the automatic parking auxiliary system to park the vehicle into the intended parking space;
and if the intended parking space is occupied, generating a second prompt voice to announce that the intended parking space is occupied.
In some embodiments, the method further comprises:
if the first intention detection result indicates that the user has the parking intention and does not generate the intention parking direction, generating a third prompt voice to prompt the user to select the intention parking direction.
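The decision branches above (trigger the system, announce an occupied space, or ask for a direction) can be summarized in a single function. The action strings and the `"free"`/`"occupied"` state values are illustrative placeholders, not identifiers from the patent:

```python
def parking_decision(has_parking_intention, intended_slot):
    """Combine the first intention detection result (speech) with the
    second (intended slot position and state) into one of four actions.

    `intended_slot` is a (position, state) pair, or None when no intended
    parking direction was generated.
    """
    if not has_parking_intention:
        return "idle"                     # no keyword heard: keep listening
    if intended_slot is None:
        return "prompt_select_direction"  # third prompt voice
    _position, state = intended_slot
    if state == "free":
        return "trigger_apa"              # first prompt voice, then park in
    return "prompt_occupied"              # second prompt voice
```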
In a second aspect, the present application provides an automatic parking control system based on user intention, the system including:
the detection processing module is used for detecting the voice of the user and judging whether the user has the parking intention;
the voice detection module is used for detecting the voice of the user and generating a first intention detection result;
the human body posture detection module is used for detecting the human body posture of the user and generating an intended parking direction;
the parking space detection module is used for detecting a parking space environment and generating a parking space detection result, wherein the parking space detection result comprises the positions of one or more candidate parking spaces and the states of the one or more candidate parking spaces;
a parking space selection module, configured to generate a second intention detection result according to the intention parking direction and the parking space detection result, where the second intention detection result includes a position of an intention parking space and a state of the intention parking space, and the intention parking space is included in the one or more candidate parking spaces;
and the central processing module is used for judging whether to trigger an automatic parking auxiliary system according to the first intention detection result and the second intention detection result so as to park the vehicle into the intention parking space.
In a third aspect, the present application provides an electronic device, comprising:
one or more processors;
and memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
detecting the voice of the user to generate a first intention detection result;
detecting the human body posture of the user to generate an intended parking direction;
detecting a parking space environment to generate a parking space detection result, wherein the parking space detection result comprises positions of one or more candidate parking spaces and states of the one or more candidate parking spaces;
generating a second intention detection result according to the intention parking direction and the parking space detection result, wherein the second intention detection result comprises the position of an intention parking space and the state of the intention parking space, and the intention parking space is contained in the one or more candidate parking spaces;
and judging whether to trigger an automatic parking auxiliary system according to the first intention detection result and the second intention detection result so as to park the vehicle into the intention parking space.
The beneficial effects achieved by the present application are as follows:
the application provides an automatic parking control method based on user intention, which comprises the steps of detecting the voice of a user and generating a first intention detection result; detecting the human body posture of a user to generate an intentional parking direction; detecting a parking space environment to generate a parking space detection result, wherein the parking space detection result comprises the positions of one or more candidate parking spaces and the states of the one or more candidate parking spaces; generating a second intention detection result according to the intention parking direction and the parking space detection result, wherein the second intention detection result comprises the position of an intention parking space and the state of the intention parking space, and the intention parking space is contained in the one or more candidate parking spaces; and judging whether an automatic parking auxiliary system is triggered or not according to the first intention detection result and the second intention detection result so as to park the vehicle into the intention parking space.
According to the method, the user's parking intention and intended parking direction are obtained by detecting the user's voice and body posture, the position of the intended parking space selected by the user is determined in combination with the parking space detection result, and the automatic parking auxiliary system is triggered to park the vehicle into the intended parking space when that space is available. The method provides a good experience: without tapping the vehicle-mounted screen, the user can park the vehicle into the selected space simply by saying a keyword expressing a parking intention while pointing in the direction of the desired space. In addition, the present application also detects the state of the parking space, effectively avoiding the situation in which, after the user selects a space, the vehicle cannot be parked because the space is occupied by an obstacle the user did not notice.
Furthermore, the application also provides a method for judging whether the user has the parking intention or not by detecting whether the corresponding text information contains the preset keyword or not after converting the collected voice into the text information, and the parking intention of the user can be accurately detected.
Further, the method and the device further provide that the human body posture of the user is detected based on the depth camera so as to obtain three-dimensional coordinates of arm key nodes and finger key nodes; whether the arm of the user is lifted and the finger of the user points is sequentially judged, and only when the arm of the user is lifted and the finger of the user points is detected, a three-dimensional space straight line is generated to reduce unnecessary resource waste; meanwhile, the accuracy of judgment and the accuracy of the generated three-dimensional space straight line can be improved based on the three-dimensional coordinates acquired by the depth camera.
Furthermore, the present application identifies the positions and states of the candidate parking spaces based on the two-dimensional parking space image and the parking space depth information generated by the depth camera, which improves the accuracy of the recognized candidate parking space positions and states and improves robustness.
Further, the present application also provides for acquiring the human body posture and the parking space environment simultaneously through the same depth camera, reducing cost.
Furthermore, the present application first judges whether the user has a parking intention, judges the state of the intended parking space only after the parking intention is confirmed, and parks the vehicle only when the intended parking space is available, so that false triggering of the automatic parking auxiliary system can be avoided efficiently and quickly. When the intended parking space is occupied, a voice prompt reminds the user that the space is occupied, which improves human-machine interaction and user experience and avoids the frustration the user might feel when the vehicle cannot be parked in the intended space.
Furthermore, the method and the device further provide that when the intention of the user is detected but the intention parking direction is not detected, voice is generated to prompt the user to select the intention parking direction, and the problem that automatic parking control operation cannot be performed due to the fact that the user is unfamiliar with operation requirements is solved.
All products of this application need not have all of the above-described effects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without inventive effort, wherein:
fig. 1 is an architecture diagram of a parking space autonomous selection system provided in an embodiment of the present application;
fig. 2 is a flowchart of an algorithm of the parking space autonomous selection system provided in the embodiment of the present application;
FIG. 3 is a schematic diagram of an application scenario provided by an embodiment of the present application;
FIG. 4 is a flowchart of an automatic parking control method provided in the embodiment of the present application;
fig. 5 is a structural diagram of an automatic parking control system according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the present application clearer, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be understood that throughout the description and claims of this application, unless the context clearly requires otherwise, the words "comprise", "comprising", and the like, are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
It will be further understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present application, the meaning of "a plurality" is two or more unless otherwise specified.
It should be noted that the terms "S1", "S2", etc. are used for descriptive purposes only, do not denote a particular order or sequence, and are not intended to limit the present application; they are merely used for convenience in describing the methods of the present application and are not to be construed as indicating the order of the steps. In addition, the technical solutions of the various embodiments may be combined with each other, provided that the combination can be realized by a person skilled in the art; when technical solutions are contradictory or a combination cannot be realized, such a combination should be considered not to exist and does not fall within the protection scope of the present application.
As described in the background, in the prior art some solutions have the automatic parking assist system select the parking space directly and automatically, lacking a decision-making process for the user in the vehicle and failing to meet the user's personalized parking requirements, while other solutions give the user a decision opportunity but still require a person outside the vehicle to cooperate in directing the vehicle.
To solve the above technical problems, the present application provides an automatic parking control method and system based on user intention, intended for users inside the vehicle. When a user expresses an intention to park, the intended parking direction is confirmed by recognizing the user's body posture and combined with the positions and states of the detected parking spaces; after it is determined that the space selected by the user is available, the automatic parking auxiliary system is triggered to park the vehicle into the intended parking space, improving the user's parking experience.
Example one
In order to implement the automatic parking control method based on the user intention disclosed in the present application, referring to fig. 1, an embodiment of the present application provides a parking space autonomous selection system, which includes a camera module, a microphone module, a speaker module, a human body posture recognition module, a parking space detection module, a voice generation module, a central logic module, a parking space selection module, a 3D straight line calculation module, and an automatic parking assist system. Specifically, referring to fig. 2, the process of performing the automatic parking control operation by using the parking space autonomous selection system disclosed in this embodiment includes:
s100, collecting voice of a user and judging whether the user has parking intention.
The microphone module collects the user's voice input through a microphone, converts it into corresponding text and sends the text to the voice detection module. The voice detection module detects whether the received text contains keywords that can indicate a parking intention, such as "stop", "park" or "stop there". If such a keyword is detected, the user is judged to have a parking intention; otherwise, the user is judged not to have a parking intention. When the user has a parking intention, the subsequent operations continue to be executed; when the user does not, the subsequent operations are not executed and the system returns to a standby state and continues to detect the user's voice.
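The keyword check performed by the voice detection module can be sketched as follows. The keyword list and function name are illustrative assumptions, since the patent does not fix a particular keyword set or implementation:

```python
# Illustrative sketch of the first intention detection step: speech has
# already been converted to text, and the text is scanned for preset
# parking keywords. The keyword list below is an assumption.
PARKING_KEYWORDS = ("park", "stop", "stop there", "pull in")

def detect_parking_intention(text: str) -> bool:
    """Return True if the recognized speech text contains any preset
    keyword, i.e. the first intention detection result indicates that
    the user has the parking intention."""
    normalized = text.lower()
    return any(keyword in normalized for keyword in PARKING_KEYWORDS)
```

In a full system the text would come from the on-board speech recognizer; plain substring matching is the simplest possible rule, and a production system might instead use word-boundary matching or a small intent classifier.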
S200, detecting the human body posture of the user to determine the intended parking direction selected by the user.
The process of determining the parking direction selected by the user may specifically include:
S210, detecting the human body posture of the user based on a depth camera, and acquiring the user's key nodes and corresponding three-dimensional coordinates;
Specifically, the human body posture recognition module detects the user's body posture with the depth camera, generating a two-dimensional image and the depth information corresponding to that posture, and then extracts the user's key nodes from the two-dimensional image using image recognition, where the key nodes include arm key nodes and hand key nodes. Preferably, a convolutional neural network may be used to recognize the two-dimensional image and obtain the arm and hand key nodes; the specific image recognition technique is not limited here. After the arm and hand key nodes are obtained, the human body posture recognition module maps them into the depth information using the calibration relationship between the depth camera's two-dimensional image and its depth information, thereby obtaining the three-dimensional coordinates of the user's arm and hand key nodes.
The depth camera may be any camera capable of acquiring depth information and a two-dimensional image simultaneously; specifically, it may be an RGB-D camera, a binocular (stereo) camera, a TOF (time-of-flight) depth camera, or any other such camera that may appear in the future, and the present application does not limit this. In addition, it should be noted that the user body postures include both the driver's posture and the front passenger's posture, because the parking space is sometimes selected on the front passenger's suggestion; whether the front passenger makes the corresponding pointing posture is therefore also considered.
S220, judging the arm state of the user according to the key node of the arm of the user and the corresponding three-dimensional coordinate;
Specifically, the human body posture recognition module selects several arm key nodes and computes the geometric angle relationships among them to judge whether the user's arm is lifted; the arm key nodes include the shoulder joint node, elbow node, wrist node, and other nodes on the arm. If the arm is detected as lifted, the subsequent operations continue in order to generate a three-dimensional space straight line. If the arm is not lifted, i.e., the user is not pointing at a parking space, the user is considered not to have a parking intention; the subsequent operations are not executed and the system returns to a ready state to continue detecting the user's posture, reducing unnecessary resource overhead.
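The "geometric angle relationship" test for a lifted arm can be sketched as below. The 120-degree straightness threshold, the y-up coordinate convention, and all names are illustrative assumptions, not values given in the patent:

```python
import math

def angle_deg(a, b, c):
    """Angle at vertex b (in degrees) formed by 3-D points a, b, c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def arm_is_lifted(shoulder, elbow, wrist, min_elbow_angle=120.0):
    """Treat a nearly straight arm whose wrist is at or above the shoulder as lifted."""
    straight = angle_deg(shoulder, elbow, wrist) >= min_elbow_angle
    raised = wrist[1] >= shoulder[1]  # assumes the y axis points up
    return straight and raised
```

The same angle helper can serve the finger-pointing test of step S230, applied to palm and finger nodes instead.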
S230, judging the finger state of the user according to the three-dimensional coordinates corresponding to the key nodes of the hand of the user;
Similarly, the human body posture recognition module selects several hand key nodes and computes the geometric angle relationships among them to judge whether the user's finger is pointing; the hand key nodes are nodes on the palm and fingers. If a pointing finger is detected, the subsequent operations continue in order to generate a three-dimensional space straight line. If the finger is not pointing, i.e., the user does not have a parking intention, the subsequent operations are not executed and the system returns to a ready state to continue detecting the user's posture, reducing unnecessary resource overhead.
S240, when the pointing direction of the finger of the user is detected, selecting a key node of the user and generating a three-dimensional space straight line according to the corresponding three-dimensional coordinate and a preset processing rule;
Specifically, since the user key nodes include both arm key nodes and hand key nodes, when the 3D straight line calculation module selects key nodes to generate the three-dimensional straight line, it may select them all from the hand nodes, all from the arm nodes, or from both; for example, a fingertip node and a palm node, a fingertip node and a wrist node, or an elbow node and a fingertip node may be selected. If only two key nodes are selected, the three-dimensional space straight line can be determined directly from the three-dimensional coordinates of the two points; if more than two are selected, the straight line can be obtained by least-squares fitting.
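The least-squares fit mentioned above can be computed as the first principal component of the selected key-node coordinates. A sketch using NumPy; the function name and the orientation convention (first node toward last node) are assumptions:

```python
import numpy as np

def fit_pointing_ray(points):
    """Fit a 3-D line through two or more key-node coordinates.

    Returns (origin, direction): the centroid of the points and a unit
    direction vector of the least-squares (principal-component) line.
    """
    pts = np.asarray(points, dtype=float)
    origin = pts.mean(axis=0)
    # First right-singular vector of the centered points = line direction.
    _, _, vt = np.linalg.svd(pts - origin)
    direction = vt[0]
    # Orient the ray from the first node (e.g. elbow) toward the last (e.g. fingertip).
    if np.dot(direction, pts[-1] - pts[0]) < 0:
        direction = -direction
    return origin, direction
```

With exactly two nodes this degenerates to the normalized difference of the two points, so one routine covers both cases described above.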
And S250, determining the intention parking direction according to the three-dimensional space straight line after the three-dimensional space straight line is detected.
This yields a three-dimensional straight line in space, and the direction in which that line points is the intended parking direction selected by the user.
And S300, detecting the parking space environment to generate a parking space detection result.
The parking space detection module acquires a two-dimensional parking space image and the corresponding parking space depth information from the parking space environment in front of the vehicle as captured by the depth camera, and generates the positions and states of one or more candidate parking spaces. Specifically, a convolutional neural network model may be used to detect the positions of candidate parking spaces in the two-dimensional image and their states (i.e., whether each space is occupied); then, using the calibration relationship between the two-dimensional parking space image and the parking space depth information, the candidate parking spaces are detected and identified in three-dimensional space, yielding their three-dimensional positions (i.e., the positions of the candidate parking spaces) and their states.
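The "calibration relationship" that lifts a 2-D detection into three-dimensional space is, under a pinhole camera model, a depth-scaled back-projection through the camera intrinsics. A minimal sketch; the intrinsic parameters fx, fy, cx, cy are assumed known from calibration and are not specified by the patent:

```python
def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into camera coordinates.

    fx, fy: focal lengths in pixels; cx, cy: principal point (pinhole model).
    Returns (x, y, z) in meters in the camera frame.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

The same back-projection applies to the human-posture key nodes of step S210, since both use the depth camera's image-to-depth calibration.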
It should be noted that in implementation a single depth camera may be used, installed laterally behind the driver seat and front passenger seat so that it can simultaneously detect the postures of the driver and front passenger and the parking space environment in front of the vehicle. Alternatively, two depth cameras may be used: one installed in front of the driver and front passenger to detect their postures, and one installed outside the vehicle, for example on the roof or a rearview mirror, to detect the parking spaces in front of it; the specific installation positions are not limited in the present application.
Fig. 3 is a schematic view of an application scenario of the present application. Those skilled in the art should understand that steps S100, S200, and S300 are executed in parallel, with no required order among them.
And S400, determining the position of the intention parking space.
After the intended parking direction is determined, the position of the parking space the user is pointing at is not yet known; the parking space selection module therefore matches the intended parking direction against the positions and states of all candidate parking spaces detected by the parking space detection module to determine the position and state of the intended parking space.
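One plausible way to implement this matching step is to pick the candidate space whose center makes the smallest angle with the pointing ray, rejecting matches beyond a tolerance. The data layout, the 10-degree threshold, and all names below are assumptions, not the patent's stated implementation:

```python
import math

def match_intended_space(origin, direction, candidates, max_angle_deg=10.0):
    """Return the candidate whose center lies closest to the pointing ray.

    origin: ray start point; direction: unit pointing vector.
    candidates: list of dicts like {"position": (x, y, z), "occupied": bool}.
    Returns None when no candidate is within max_angle_deg of the ray.
    """
    best, best_angle = None, max_angle_deg
    for slot in candidates:
        to_slot = [p - o for p, o in zip(slot["position"], origin)]
        norm = math.sqrt(sum(c * c for c in to_slot))
        if norm == 0.0:
            continue  # degenerate: slot center coincides with the ray origin
        cos_a = sum(d * c for d, c in zip(direction, to_slot)) / norm
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle < best_angle:
            best, best_angle = slot, angle
    return best
```

The threshold bounds how precisely the user must point; returning None corresponds to the "no intended parking direction" branch handled by the central logic module.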
S500, judging whether to trigger the automatic parking auxiliary system according to the position and the state of the intended parking space and the parking intention of the user so as to park the vehicle in the intended parking space.
The central logic module receives the intended parking direction, the judgment result of the user's parking intention, and the position and state information of the intended parking space. First, it judges whether the user has a parking intention: when the user does not, it performs no further judgment on the state of the intended parking space; when the user does, it continues to judge that state. If the intended parking space is available, the central logic module feeds its state back to the voice generation module, which generates a first prompt voice informing the user that the space is available, for example "The parking space is available, parking now", played back to the user through the speaker module; at the same time, the central logic module sends the position of the intended parking space to the automatic parking assist system, which automatically parks the vehicle in that position. If the intended parking space is occupied, the central logic module feeds its state back to the voice generation module, which generates a second prompt voice, for example "The parking space is occupied, please select another one", played back to the user through the speaker module.
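The central logic module's branching can be summarized as a small dispatch function. The action labels below are illustrative stand-ins for the three prompt voices and the parking command, not names from the patent:

```python
def decide_parking_action(has_intent, intended_space):
    """Map the intention result and slot state to a prompt or parking action.

    intended_space: None when no intended direction/space was produced,
    otherwise a dict like {"position": (x, y, z), "occupied": bool}.
    """
    if not has_intent:
        return ("idle", None)                    # keep listening for voice input
    if intended_space is None:
        return ("prompt_select_space", None)     # third prompt voice
    if intended_space["occupied"]:
        return ("prompt_occupied", None)         # second prompt voice
    return ("start_auto_park", intended_space["position"])  # first prompt voice
```

Centralizing the branching this way makes the "intention but no direction" case below an explicit state rather than an error path.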
In addition, the user may have a parking intention while no intended parking direction has been generated. In this case, the central logic module feeds this back to the voice generation module, which generates a third prompt voice asking the user to select an intended parking direction, for example "Please select a parking space, and the vehicle will be parked in it automatically".
Based on the automatic parking control method described above, the parking space autonomous selection system can effectively improve the human-computer interaction experience of selecting a parking space, reduce false recognition of parking intentions and false triggering of the automatic parking system, and improve robustness.
Example two
In correspondence to the above-described embodiments, the present application provides an automatic parking control method based on a user intention, as shown in fig. 4, the method including:
4100. detecting the voice of the user to generate a first intention detection result;
preferably, the detecting the user voice to generate the first intention detection result includes:
4110. collecting user voice and converting the user voice into corresponding text information;
4120. detecting whether the text information contains preset keywords or not so as to generate a first intention detection result;
4130. if the text information contains the preset keyword, the first intention detection result indicates that the user has the parking intention;
4140. and if the text information does not contain the preset keyword, the first intention detection result indicates that the user does not have the parking intention.
4200. Detecting the human body posture of the user to generate an intended parking direction;
preferably, the detecting the human body posture of the user and generating the intended parking direction includes:
4210. detecting the human body posture of the user based on a depth camera, and acquiring key nodes and three-dimensional coordinates corresponding to the key nodes;
4220. selecting the key node and judging whether to generate a three-dimensional space straight line according to the three-dimensional coordinate corresponding to the key node;
preferably, the key nodes comprise user arm key nodes and user hand key nodes; the selecting the key node and judging whether to generate a three-dimensional space straight line according to the three-dimensional coordinate corresponding to the key node comprises the following steps:
4221. selecting the user arm key node and judging the state of the user arm according to the three-dimensional coordinate corresponding to the user arm key node;
4222. if the user arm state is a lifted state, selecting the user hand key node and judging the user finger state according to the three-dimensional coordinates corresponding to the user hand key node;
4223. and if the user finger state has a pointing direction, selecting the user key node and generating the three-dimensional space straight line according to the three-dimensional coordinate corresponding to the user key node and the preset processing rule.
4230. And after the three-dimensional space straight line is detected, determining the intended parking direction according to the three-dimensional space straight line.
Preferably, before the detecting the human body posture of the user and generating the intended parking direction, the method further includes:
4240. and simultaneously acquiring the human body posture of the user and the parking space environment through the same depth camera, wherein the depth camera is installed laterally behind the main driving position and the secondary driving position of the vehicle.
4300. Detecting a parking space environment to generate a parking space detection result, wherein the parking space detection result comprises positions of one or more candidate parking spaces and states of the one or more candidate parking spaces;
preferably, the detecting a parking space environment to generate a parking space detection result, where the parking space detection result includes positions of one or more candidate parking spaces and states of the one or more candidate parking spaces, includes:
4310. detecting the parking space environment based on a depth camera, and generating a parking space two-dimensional image and parking space depth information corresponding to the parking space environment;
4320. according to the parking space two-dimensional image and a preset model, identifying the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces in the parking space environment; and/or
4330. And identifying the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces in the parking space environment according to the calibration relation between the two-dimensional parking space image and the depth information of the parking space.
4400. Generating a second intention detection result according to the intention parking direction and the parking space detection result, wherein the second intention detection result comprises the position of an intention parking space and the state of the intention parking space, and the intention parking space is contained in the one or more candidate parking spaces;
preferably, the generating a second intention detection result according to the intention parking direction and the parking space detection result, the second intention detection result including a position of an intention parking space and a state of the intention parking space, includes:
4410. and matching the intended parking direction with the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces, and determining the positions of the intended parking spaces and the states of the intended parking spaces to generate the second intention detection result.
4500. And judging whether to trigger an automatic parking auxiliary system according to the first intention detection result and the second intention detection result so as to park the vehicle into the intention parking space.
Preferably, the determining whether to trigger an automatic parking assist system to park the vehicle into the intended parking space according to the first intention detection result and the second intention detection result includes:
4510. if the first intention detection result indicates that the user has the parking intention, the state of the intention parking space is continuously judged;
4520. if the intention parking space is available for parking, generating a first prompt voice to prompt the intention parking space to be available for parking, and triggering the automatic parking auxiliary system to park the vehicle in the intention parking space;
4530. and if the intended parking space is occupied, generating a second prompt voice to prompt that the intended parking space is occupied.
Preferably, the method further comprises:
4540. if the first intention detection result indicates that the user has the parking intention and does not generate the intention parking direction, generating a third prompt voice to prompt the user to select the intention parking direction.
Example three
In accordance with the first and second embodiments, the present application provides an automatic parking control system based on user intention, as shown in fig. 5, the system includes:
a voice detection module 510, configured to detect a voice of a user and generate a first intention detection result;
a human body posture detection module 520, configured to detect a human body posture of the user and generate an intended parking direction;
a parking space detection module 530, configured to detect a parking space environment and generate a parking space detection result, where the parking space detection result includes positions of one or more candidate parking spaces and states of the one or more candidate parking spaces;
a parking space selection module 540, configured to generate a second intention detection result according to the intention parking direction and the parking space detection result, where the second intention detection result includes a position of an intention parking space and a state of the intention parking space, where the intention parking space is included in the one or more candidate parking spaces;
the central processing module 550 is configured to determine whether to trigger an automatic parking assist system according to the first intention detection result and the second intention detection result, so as to park the vehicle into the intended parking space.
Preferably, the voice detection module 510 is further configured to collect a user voice and convert the user voice into corresponding text information; detecting whether the text information contains preset keywords or not so as to generate a first intention detection result; if the text information contains the preset keyword, the first intention detection result indicates that the user has the parking intention; if the preset keyword is not contained in the text information, the first intention detection result indicates that the user does not have the parking intention.
Preferably, the human body posture detecting module 520 is further configured to detect the human body posture of the user based on a depth camera, and obtain a key node and a three-dimensional coordinate corresponding to the key node; selecting the key node and judging whether to generate a three-dimensional space straight line according to the three-dimensional coordinate corresponding to the key node; and after the three-dimensional space straight line is detected, determining the intentional parking direction according to the three-dimensional space straight line.
Preferably, the human body posture detection module 520 is further configured to select the user arm key node and determine the user arm state according to the three-dimensional coordinate corresponding to the user arm key node; if the user arm state is a lifted state, selecting the user hand key node and judging the user finger state according to the three-dimensional coordinates corresponding to the user hand key node; and if the user finger state has a pointing direction, selecting the user key node and generating the three-dimensional space straight line according to the three-dimensional coordinate corresponding to the user key node and the preset processing rule.
Preferably, the parking space detection module 530 is further configured to detect the parking space environment based on a depth camera, and generate a parking space two-dimensional image and parking space depth information corresponding to the parking space environment; according to the parking space two-dimensional image and a preset model, identifying the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces in the parking space environment; and/or identifying the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces in the parking space environment according to the calibration relation between the two-dimensional parking space image and the depth information of the parking space.
Preferably, the parking space detection module 530 is further configured to obtain the human body posture of the user and the parking space environment simultaneously through the same depth camera, where the depth camera is installed behind and beside a primary driving seat and a secondary driving seat of the vehicle.
Preferably, the parking space selection module 540 is further configured to match the intended parking direction with the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces, and determine the position of the intended parking space and the state of the intended parking space, so as to generate the second intention detection result.
Preferably, the central processing module 550 is further configured to: if the first intention detection result indicates that the user has the parking intention, the state of the intention parking space is continuously judged; if the intention parking space is available for parking, generating a first prompt voice to prompt the intention parking space to be available for parking, and triggering the automatic parking auxiliary system to park the vehicle in the intention parking space; and if the intentional parking space is in an occupied state, generating a second prompt voice to prompt that the intentional parking space is occupied.
Preferably, the central processing module 550 is further configured to: if the first intention detection result indicates that the user has the parking intention and does not generate the intention parking direction, generating a third prompt voice to prompt the user to select the intention parking direction.
Example four
Corresponding to all the above embodiments, an embodiment of the present application provides an electronic device, including: one or more processors; and memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
detecting the voice of the user to generate a first intention detection result;
detecting the human body posture of the user to generate an intended parking direction;
detecting a parking space environment to generate a parking space detection result, wherein the parking space detection result comprises positions of one or more candidate parking spaces and states of the one or more candidate parking spaces;
generating a second intention detection result according to the intention parking direction and the parking space detection result, wherein the second intention detection result comprises the position of an intention parking space and the state of the intention parking space, and the intention parking space is contained in the one or more candidate parking spaces;
and judging whether to trigger an automatic parking auxiliary system according to the first intention detection result and the second intention detection result so as to park the vehicle into the intention parking space.
Fig. 6 illustrates an architecture of an electronic device, which may specifically include a processor 610, a video display adapter 611, a disk drive 612, an input/output interface 613, a network interface 614, and a memory 620. The processor 610, the video display adapter 611, the disk drive 612, the input/output interface 613, the network interface 614, and the memory 620 may be communicatively connected by a bus 630.
The processor 610 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solution provided in the present Application.
The Memory 620 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 620 may store an operating system 621 for controlling the operation of the electronic device 600, a Basic Input Output System (BIOS)622 for controlling low-level operations of the electronic device 600. In addition, a web browser 623, a data storage management system 624, an icon font processing system 625, and the like may also be stored. The icon font processing system 625 may be an application program that implements the operations of the foregoing steps in this embodiment of the application. In summary, when the technical solution provided in the present application is implemented by software or firmware, the relevant program codes are stored in the memory 620 and called for execution by the processor 610.
The input/output interface 613 is used for connecting an input/output module to realize information input and output. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The network interface 614 is used for connecting a communication module (not shown in the figure) to realize the communication interaction between the device and other devices. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 630 includes a path that transfers information between the various components of the device, such as processor 610, video display adapter 611, disk drive 612, input/output interface 613, network interface 614, and memory 620.
It should be noted that although the above devices only show the processor 610, the video display adapter 611, the disk drive 612, the input/output interface 613, the network interface 614, the memory 620, the bus 630, etc., in a specific implementation, the device may also include other components necessary for normal operation. Furthermore, it will be understood by those skilled in the art that the apparatus described above may also include only the components necessary to implement the solution of the present application, and not necessarily all of the components shown in the figures.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, or the like, and includes several instructions for enabling a computer device (which may be a personal computer, a cloud server, or a network device) to execute the method according to the embodiments or some portions of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments, which are substantially similar to the method embodiments, are described in a relatively simple manner, and reference may be made to some descriptions of the method embodiments for relevant points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (7)

1. An automatic parking control method based on user intention, characterized by comprising:
detecting the voice of the user to generate a first intention detection result;
detecting the human body posture of the user to generate an intended parking direction;
detecting a parking space environment to generate a parking space detection result, wherein the parking space detection result comprises the positions of one or more candidate parking spaces and the states of the one or more candidate parking spaces;
generating a second intention detection result according to the intention parking direction and the parking space detection result, wherein the second intention detection result comprises the position of an intention parking space and the state of the intention parking space, and the intention parking space is contained in the one or more candidate parking spaces;
judging whether an automatic parking auxiliary system is triggered according to the first intention detection result and the second intention detection result so as to park the vehicle into the intention parking space;
wherein detecting the parking space environment to generate the parking space detection result, the parking space detection result comprising the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces, comprises:
detecting the parking space environment with a depth camera, and generating a two-dimensional parking space image and parking space depth information corresponding to the parking space environment;
identifying the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces in the parking space environment according to the two-dimensional parking space image and a preset model; and/or
identifying the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces in the parking space environment according to a calibration relation between the two-dimensional parking space image and the parking space depth information;
wherein judging, according to the first intention detection result and the second intention detection result, whether to trigger the automatic parking assist system so as to park the vehicle into the intended parking space comprises:
if the first intention detection result indicates that the user has a parking intention, further judging the state of the intended parking space;
if the intended parking space is available for parking, generating a first prompt voice to indicate that the intended parking space is available, and triggering the automatic parking assist system to park the vehicle into the intended parking space;
if the intended parking space is occupied, generating a second prompt voice to indicate that the intended parking space is occupied;
wherein, before detecting the human body posture of the user to generate the intended parking direction, the method further comprises:
simultaneously acquiring the human body posture of the user and the parking space environment through one and the same depth camera, the depth camera being installed behind the driver seat and the front passenger seat of the vehicle.
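The two-stage trigger check of claim 1 (speech intention first, then the state of the intended space) can be sketched roughly as follows. This is an illustrative sketch only, not the patented implementation; the class, function, and prompt strings are all hypothetical.

```python
# Hypothetical sketch of the two-stage trigger check in claim 1.
# All names and prompt strings are illustrative, not taken from the patent.
from dataclasses import dataclass

@dataclass
class IntendedSpace:
    position: tuple  # position of the intended parking space
    state: str       # "available" or "occupied"

def decide_parking(has_parking_intent: bool, space: IntendedSpace):
    """Return (prompt, trigger_apa): the prompt voice to play and whether
    the automatic parking assist system should be triggered."""
    if not has_parking_intent:
        return None, False  # speech shows no parking intention: do nothing
    if space.state == "available":
        # first prompt voice + trigger the assist system
        return "first prompt: space available", True
    # second prompt voice, no trigger
    return "second prompt: space occupied", False
```

Note that a positive speech intention alone never triggers parking; the space state is always checked first, which is the point of the two detection results in the claim.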
2. The method of claim 1, wherein detecting the user speech to generate the first intention detection result comprises:
collecting the user speech and converting it into corresponding text information;
detecting whether the text information contains a preset keyword, so as to generate the first intention detection result;
wherein if the text information contains the preset keyword, the first intention detection result indicates that the user has a parking intention;
and if the text information does not contain the preset keyword, the first intention detection result indicates that the user has no parking intention.
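The keyword check of claim 2 amounts to scanning the recognized text for preset parking keywords. A minimal sketch, with an assumed keyword list (the patent does not specify the keywords):

```python
# Hypothetical keyword-spotting step of claim 2: the recognized text is
# scanned for preset parking keywords; any hit yields a positive first
# intention detection result. The keyword list is an assumption.
PRESET_KEYWORDS = ("park", "parking")

def has_parking_intention(text: str) -> bool:
    """First intention detection: True iff a preset keyword occurs."""
    text = text.lower()
    return any(keyword in text for keyword in PRESET_KEYWORDS)
```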
3. The method of claim 1, wherein detecting the human body posture of the user to generate the intended parking direction comprises:
detecting the human body posture of the user with a depth camera, and acquiring key nodes and three-dimensional coordinates corresponding to the key nodes;
selecting among the key nodes and judging, according to the three-dimensional coordinates corresponding to the key nodes, whether to generate a three-dimensional space straight line;
and after the three-dimensional space straight line is generated, determining the intended parking direction according to the three-dimensional space straight line.
4. The method of claim 3, wherein the key nodes comprise user arm key nodes and user hand key nodes, and selecting among the key nodes and judging whether to generate the three-dimensional space straight line according to the three-dimensional coordinates corresponding to the key nodes comprises:
selecting the user arm key nodes and judging the state of the user's arm according to the three-dimensional coordinates corresponding to the user arm key nodes;
if the user's arm is in a raised state, selecting the user hand key nodes and judging the state of the user's fingers according to the three-dimensional coordinates corresponding to the user hand key nodes;
and if the user's fingers exhibit a pointing direction, selecting the key nodes and generating the three-dimensional space straight line according to the three-dimensional coordinates corresponding to the key nodes and a preset processing rule.
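Claims 3-4 describe fitting a 3D straight line through selected key nodes once a raised arm and a pointing finger are detected. One plausible "preset processing rule" is a ray through two nodes, e.g. elbow to fingertip; the node choice here is an assumption for illustration, not taken from the patent.

```python
# Hypothetical sketch of claims 3-4: fit a 3D ray through two key nodes
# (here elbow -> fingertip; this node pairing is an assumed processing rule).
import math

def pointing_ray(elbow, fingertip):
    """Return (origin, unit_direction) of the 3D pointing line,
    or None if the two key nodes coincide (no direction defined)."""
    d = tuple(f - e for f, e in zip(fingertip, elbow))
    norm = math.sqrt(sum(c * c for c in d))
    if norm < 1e-6:
        return None  # degenerate: cannot generate the straight line
    return elbow, tuple(c / norm for c in d)
```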
5. The method according to any of claims 1-4, wherein generating the second intention detection result according to the intended parking direction and the parking space detection result, the second intention detection result comprising the position of the intended parking space and the state of the intended parking space, comprises:
matching the intended parking direction against the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces, and determining the position of the intended parking space and the state of the intended parking space to generate the second intention detection result.
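The matching step of claim 5 is not spelled out; one natural reading is to pick the candidate space whose center deviates least in angle from the pointing ray. The sketch below assumes that interpretation and a `(position, state)` candidate format, neither of which is specified by the patent.

```python
# Hypothetical matching step of claim 5: choose the candidate parking
# space best aligned (smallest angular deviation) with the pointing ray.
import math

def match_intended_space(origin, unit_dir, candidates):
    """candidates: list of (position, state) with 3D positions.
    Return the (position, state) pair closest in angle to the ray."""
    def angle(pos):
        v = tuple(p - o for p, o in zip(pos, origin))
        n = math.sqrt(sum(c * c for c in v)) or 1e-9
        cos = sum(a * b for a, b in zip(v, unit_dir)) / n
        return math.acos(max(-1.0, min(1.0, cos)))  # clamp for safety
    return min(candidates, key=lambda c: angle(c[0]))
```

The returned position and state together form the second intention detection result.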
6. The method of claim 2, further comprising:
if the first intention detection result indicates that the user has a parking intention but no intended parking direction has been generated, generating a third prompt voice to prompt the user to indicate an intended parking direction.
7. An automatic parking control system based on user intention, characterized by comprising:
a voice detection module, configured to detect user speech and generate a first intention detection result;
a human body posture detection module, configured to detect a human body posture of the user and generate an intended parking direction;
a parking space detection module, configured to detect a parking space environment and generate a parking space detection result, wherein the parking space detection result comprises positions of one or more candidate parking spaces and states of the one or more candidate parking spaces;
a parking space selection module, configured to generate a second intention detection result according to the intended parking direction and the parking space detection result, wherein the second intention detection result comprises a position of an intended parking space and a state of the intended parking space, and the intended parking space is one of the one or more candidate parking spaces;
a central processing module, configured to judge, according to the first intention detection result and the second intention detection result, whether to trigger an automatic parking assist system so as to park the vehicle into the intended parking space;
wherein the parking space detection module is further configured to detect the parking space environment with a depth camera and generate a two-dimensional parking space image and parking space depth information corresponding to the parking space environment;
the parking space detection module is further configured to identify the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces in the parking space environment according to the two-dimensional parking space image and a preset model; and/or
identify the positions of the one or more candidate parking spaces and the states of the one or more candidate parking spaces in the parking space environment according to a calibration relation between the two-dimensional parking space image and the parking space depth information;
the central processing module is further configured to further judge the state of the intended parking space when the first intention detection result indicates that the user has a parking intention;
a voice generation module is configured to generate, when the state of the intended parking space is available for parking, a first prompt voice indicating that the intended parking space is available, and to trigger the automatic parking assist system to park the vehicle into the intended parking space;
the voice generation module is further configured to generate, when the state of the intended parking space is occupied, a second prompt voice indicating that the intended parking space is occupied;
and the human body posture detection module and the parking space detection module simultaneously acquire the human body posture of the user and the parking space environment through one and the same depth camera, the depth camera being installed behind the driver seat and the front passenger seat of the vehicle.
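The "calibration relation" between the two-dimensional parking space image and the depth information can be illustrated with the standard pinhole back-projection: a pixel plus its depth value yields a 3D point in the camera frame. This is a generic sketch, not the patent's method; the intrinsics `fx, fy, cx, cy` are assumed calibration parameters.

```python
# Hypothetical pinhole back-projection illustrating a calibration relation
# between image pixels and depth values: pixel (u, v) with depth `depth`
# maps to a 3D point in the camera coordinate frame.
# fx, fy: focal lengths in pixels; cx, cy: principal point (assumptions).
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Applied to the detected corners of a candidate space, this gives the 3D positions that the parking space detection module reports.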
CN202210007508.9A 2022-01-06 2022-01-06 Automatic parking control method and system based on user intention Active CN114013431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210007508.9A CN114013431B (en) 2022-01-06 2022-01-06 Automatic parking control method and system based on user intention


Publications (2)

Publication Number Publication Date
CN114013431A CN114013431A (en) 2022-02-08
CN114013431B true CN114013431B (en) 2022-06-17

Family

ID=80069750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210007508.9A Active CN114013431B (en) 2022-01-06 2022-01-06 Automatic parking control method and system based on user intention

Country Status (1)

Country Link
CN (1) CN114013431B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115158295A (en) * 2022-06-29 2022-10-11 重庆长安汽车股份有限公司 Parking control method and device for vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102303604A (en) * 2011-06-29 2012-01-04 广东好帮手电子科技股份有限公司 Automatic parking system
CN104527642A (en) * 2014-12-31 2015-04-22 江苏大学 Automatic parking system and method based on scene diversity identification
CN113320474A (en) * 2021-07-08 2021-08-31 长沙立中汽车设计开发股份有限公司 Automatic parking method and device based on panoramic image and human-computer interaction
CN113744560A (en) * 2021-09-15 2021-12-03 厦门科拓通讯技术股份有限公司 Automatic parking method and device for parking lot, server and machine-readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10166995B2 (en) * 2016-01-08 2019-01-01 Ford Global Technologies, Llc System and method for feature activation via gesture recognition and voice command
DE102016011916A1 (en) * 2016-10-05 2017-06-01 Daimler Ag Method for carrying out an automatic parking operation of a motor vehicle
CN109109857A (en) * 2018-09-05 2019-01-01 深圳普思英察科技有限公司 A kind of unmanned vendors' cart and its parking method and device
CN109703554B (en) * 2019-02-27 2020-11-24 湖北亿咖通科技有限公司 Parking space confirmation method and device
CN110871792A (en) * 2019-12-20 2020-03-10 奇瑞汽车股份有限公司 Automatic parking control system and method
CN112158192A (en) * 2020-06-24 2021-01-01 上汽通用五菱汽车股份有限公司 Parking control method and parking control system
CN112241204B (en) * 2020-12-17 2021-08-27 宁波均联智行科技股份有限公司 Gesture interaction method and system of vehicle-mounted AR-HUD
CN113243886B (en) * 2021-06-11 2021-11-09 四川翼飞视科技有限公司 Vision detection system and method based on deep learning and storage medium


Also Published As

Publication number Publication date
CN114013431A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
EP3491493B1 (en) Gesture based control of autonomous vehicles
US8914163B2 (en) System and method for incorporating gesture and voice recognition into a single system
US8370163B2 (en) Processing user input in accordance with input types accepted by an application
US9613459B2 (en) System and method for in-vehicle interaction
US20140058584A1 (en) System And Method For Multimodal Interaction With Reduced Distraction In Operating Vehicles
EP4361771A1 (en) Gesture recognition method and apparatus, system, and vehicle
EP3909028A1 (en) Multimodal user interface for a vehicle
CN104471353A (en) Low-attention gestural user interface
CN111737670B (en) Method, system and vehicle-mounted multimedia device for multi-mode data collaborative man-machine interaction
EP2972687A1 (en) System and method for transitioning between operational modes of an in-vehicle device using gestures
US20200218488A1 (en) Multimodal input processing for vehicle computer
CN114013431B (en) Automatic parking control method and system based on user intention
WO2018061603A1 (en) Gestural manipulation system, gestural manipulation method, and program
CN113835570B (en) Control method, device, equipment, storage medium and program for display screen in vehicle
CN113799698A (en) Method, device and equipment for adjusting interior rearview mirror and storage medium
CN106598422B (en) hybrid control method, control system and electronic equipment
CN110705483B (en) Driving reminding method, device, terminal and storage medium
CN111638786A (en) Display control method, device and equipment of vehicle-mounted rear projection display system and storage medium
CN115793852A (en) Method for acquiring operation indication based on cabin area, display method and related equipment
CN108427392B (en) Interface control method and diagnostic equipment
CN115617232A (en) Remote screen control method, vehicle and computer readable storage medium
US11535268B2 (en) Vehicle and control method thereof
KR20150067679A (en) System and method for gesture recognition of vehicle
CN112074801A (en) Method and user interface for detecting input through a pointing gesture
CN116069155A (en) Display control method and device for user interface, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant