CN113696849B - Gesture-based vehicle control method, device and storage medium - Google Patents


Info

Publication number
CN113696849B
CN113696849B (application CN202110993038.3A)
Authority
CN
China
Prior art keywords
gesture
real
target
vehicle control
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110993038.3A
Other languages
Chinese (zh)
Other versions
CN113696849A (en)
Inventor
朱鹤群
胡晓健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiayu Intelligent Technology Co ltd
Original Assignee
Shanghai Xianta Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xianta Intelligent Technology Co Ltd filed Critical Shanghai Xianta Intelligent Technology Co Ltd
Priority to CN202110993038.3A
Publication of CN113696849A
Application granted
Publication of CN113696849B
Legal status: Active
Anticipated expiration

Classifications

    • B60R25/2045 Means to switch the anti-theft system on or off by hand gestures
    • B60R16/0231 Circuits relating to the driving or the functioning of the vehicle
    • B60R25/04 Anti-theft fittings operating on the propulsion system, e.g. engine or drive motor
    • B60R25/25 Means to switch the anti-theft system on or off using biometry
    • B60R25/257 Voice recognition
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention provides a gesture-based vehicle control method, device, and storage medium. The gesture-based vehicle control method comprises: acquiring a first image captured by a camera module mounted on a target vehicle, wherein a first real-time gesture action of a target user is recorded in the first image; searching, based on the first image, the user-defined gesture actions corresponding to the target user for a target gesture action that matches the first real-time gesture action; and finding, based on a pre-stored mapping relation, the vehicle control result corresponding to the target gesture action and controlling the target vehicle based on that result, wherein the mapping relation maps gesture actions to vehicle control results. With this method, the vehicle can be controlled by gestures, and the accuracy of vehicle control, its safety, and the user experience can all be improved.

Description

Gesture-based vehicle control method, device and storage medium
Technical Field
The present invention relates to the field of vehicle control technologies, and in particular, to a gesture-based vehicle control method, device, and storage medium.
Background
With the continuous development of technology, in addition to traditional manual vehicle control, automatic vehicle control technologies such as automatic driving, automatic parking, and automatic unlocking are becoming increasingly common.
In the prior art, taking vehicle unlocking as an example, an image can be captured by a camera and the vehicle unlocked if the image contains the figure of a person. However, the unlocking accuracy of this approach is low: the captured person may be someone near the vehicle other than the owner, and unlocking the vehicle in that case poses a serious safety hazard. Moreover, if the user wishes to perform other operations besides unlocking, such as ignition, turning on the air conditioner, or playing music, these personalized operations cannot be performed automatically.
Accordingly, how to implement automatic vehicle control that improves control accuracy, meets users' personalized needs, and improves the user experience has become a focus of attention in the industry.
Disclosure of Invention
The invention provides a gesture-based vehicle control method, device, and storage medium, to solve the problems of inaccurate vehicle control and low safety.
According to a first aspect of the present invention, there is provided a gesture-based vehicle control method, comprising:
acquiring a first image captured by a camera module mounted on a target vehicle, wherein a first real-time gesture action of a target user is recorded in the first image;
searching, based on the first image, the user-defined gesture actions corresponding to the target user for a target gesture action that matches the first real-time gesture action;
and finding, based on a pre-stored mapping relation, the vehicle control result corresponding to the target gesture action and controlling the target vehicle based on that result, wherein the mapping relation maps gesture actions to vehicle control results.
Optionally, the searching, based on the first image, of the user-defined gesture actions corresponding to the target user for a target gesture action that matches the first real-time gesture action includes:
extracting skeleton nodes of the target user from the first image, and taking the positions and/or position changes of the skeleton nodes in the first real-time gesture action as gesture features;
and searching for the target gesture action based on the gesture features.
Optionally, the method further comprises:
acquiring a second image captured by the camera module mounted on the target vehicle, wherein a second real-time gesture action of the target user is recorded in the second image;
obtaining mapping specification information, wherein the mapping specification information characterizes that the second real-time gesture action is mapped to a specified vehicle control result;
and determining the second real-time gesture action to be a user-defined gesture action corresponding to the target user, and updating the mapping relation according to the mapping specification information.
Optionally, the determining the second real-time gesture action to be a user-defined gesture action corresponding to the target user includes:
judging whether the second real-time gesture action matches a user-defined gesture action corresponding to the target user;
and if not, storing the second real-time gesture action and using it as a user-defined gesture action of the target user.
Optionally, the method further comprises:
after the identity characteristic of the target user is detected, the identity of the target user is determined based on the identity characteristic, so that the user-defined gesture action of the target user is found based on the identity.
Optionally, the identity feature includes one or more of:
voice characteristics, face characteristics, height characteristics.
Optionally, the vehicle control result includes one or more of the following:
unlocking, locking, ignition, engine shut-off, turning on a vehicle-mounted device, turning off a vehicle-mounted device, controlling a vehicle-mounted device to make a specified change, moving forward, reversing, and executing a parking process.
According to a second aspect of the present invention, there is provided a gesture-based vehicle control apparatus comprising:
an acquisition unit, configured to acquire a first image captured by a camera module mounted on a target vehicle, wherein a first real-time gesture action of a target user is recorded in the first image;
a searching unit, configured to search, based on the first image, the user-defined gesture actions corresponding to the target user for a target gesture action that matches the first real-time gesture action;
and a control unit, configured to find, based on a pre-stored mapping relation, the vehicle control result corresponding to the target gesture action and control the target vehicle based on that result, wherein the mapping relation maps gesture actions to vehicle control results.
According to the gesture-based vehicle control method described above, a first image captured by a camera module mounted on a target vehicle can be acquired, with a first real-time gesture action of a target user recorded in it; based on the first image, a target gesture action matching the first real-time gesture action can be found among the user-defined gesture actions corresponding to the target user; and the pre-stored mapping relation can then be searched to find the vehicle control result corresponding to the target gesture action and to control the target vehicle based on that result.
By adopting this method, the matching vehicle control result can be identified from the gesture action given by the user, and automatic vehicle control can be performed based on that result, which improves the safety of vehicle control. Moreover, users can customize their own gesture actions, which meets their personalized requirements and improves the user experience.
The identity of the user can also be recognized, so that the user-defined gesture actions bound to that user can be found; only that user can control the vehicle with those gestures, and other users cannot, which further improves the accuracy and safety of vehicle control.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the invention, and that a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart diagram of a gesture-based vehicle control method according to an exemplary embodiment of the present invention;
FIG. 2 is a flow chart diagram of another gesture-based vehicle control method according to an exemplary embodiment of the present invention;
FIG. 3 is a flow chart diagram of another gesture-based vehicle control method according to an exemplary embodiment of the present invention;
FIG. 4 is a hardware configuration diagram of an electronic device in which a gesture-based vehicle control apparatus is located, according to an exemplary embodiment of the present invention;
FIG. 5 is a block diagram of a gesture-based vehicle control apparatus according to an exemplary embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Referring to fig. 1, fig. 1 is a schematic flow chart of a gesture-based vehicle control method according to an exemplary embodiment of the invention. The method may be applied to an electronic device having a memory and a processor, and may include the following steps:
Step 102, acquiring a first image captured by a camera module mounted on a target vehicle, wherein a first real-time gesture action of a target user is recorded in the first image;
Step 104, searching, based on the first image, the user-defined gesture actions corresponding to the target user for a target gesture action that matches the first real-time gesture action;
Step 106, finding, based on a pre-stored mapping relation, the vehicle control result corresponding to the target gesture action and controlling the target vehicle based on that result, wherein the mapping relation maps gesture actions to vehicle control results.
The above steps are described in detail below.
In this embodiment, the target vehicle may be equipped with a camera module. The camera module may be a camera, for example the camera of a driving recorder or a specially installed camera; this is not particularly limited. The camera module can capture images of the environment around the target vehicle in real time.
When a target user approaches the target vehicle, the camera module can capture an image containing the target user. The target user is usually the driver, but may also be another person. The target user may control the target vehicle through gesture actions, for example by drawing a circle with a hand or sliding up, down, left, or right; this embodiment does not limit the specific form of the gesture actions. The camera module may capture an image in which a gesture action of the target user is recorded; this image is taken as the first image, and the gesture action recorded in it as the first real-time gesture action.
In this embodiment, on the one hand, the first image may be acquired; on the other hand, the user-defined gesture actions corresponding to the target user, which may be stored in advance, may be obtained. The first real-time gesture action can then be matched against the pre-stored user-defined gesture actions to find a target gesture action that matches it.
In one example, during matching, the skeleton nodes of the target user may be extracted from the first image, and the position and/or position change of each skeleton node in the first real-time gesture action may be taken as a gesture feature. The target gesture action is then searched for based on these gesture features; for example, the gesture features of the first real-time gesture action may be matched against the gesture features of the user-defined gesture actions, and the matching user-defined gesture action is taken as the target gesture action.
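The skeleton-node matching described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the feature representation (normalized node coordinates), the distance metric, the threshold value, and all function and gesture names are assumptions.

```python
import math

def gesture_distance(features_a, features_b):
    """Mean Euclidean distance between corresponding skeleton-node positions."""
    return sum(math.dist(a, b) for a, b in zip(features_a, features_b)) / len(features_a)

def find_target_gesture(realtime_features, custom_gestures, threshold=0.2):
    """Return the name of the stored user-defined gesture closest to the
    real-time gesture, or None if nothing falls within the threshold."""
    best_name, best_dist = None, threshold
    for name, stored_features in custom_gestures.items():
        d = gesture_distance(realtime_features, stored_features)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

# Hypothetical stored gestures: normalized (x, y) positions of three skeleton nodes.
custom = {
    "circle":   [(0.5, 0.5), (0.6, 0.4), (0.5, 0.3)],
    "slide_up": [(0.5, 0.8), (0.5, 0.5), (0.5, 0.2)],
}
print(find_target_gesture([(0.51, 0.49), (0.59, 0.41), (0.5, 0.31)], custom))
```

A production system would normalize for camera distance and compare trajectories over several frames, but the nearest-match-under-threshold structure is the same.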
In another example, during matching, the first image may be input into a gesture recognition model, which predicts the matching target gesture action. The gesture recognition model may be trained only on the user-defined gesture actions of the target user, or on the user-defined gesture actions and default gesture actions of all users; this is not particularly limited.
In addition, the process of extracting skeleton nodes to form gesture features and/or the process of finding the target gesture action based on the gesture features may also be implemented by the gesture recognition model.
In this embodiment, the vehicle control result corresponding to each user-defined gesture action of the user may also be stored. For example, a vehicle control result may be locking, unlocking, ignition, engine shut-off, turning on a vehicle-mounted device (for example, the air conditioner), turning off a vehicle-mounted device (for example, the air conditioner), controlling a vehicle-mounted device to make a specified change (for example, starting or stopping music), moving forward, reversing, executing a parking process, and so on. The target vehicle can then be controlled according to the vehicle control result that is found, achieving control of the vehicle based on gesture actions.
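The stored mapping relation and the control step can be illustrated with a small dispatcher. All gesture names, control-result names, and the state representation here are hypothetical; a real vehicle would drive actuators over a vehicle bus rather than mutate a dictionary.

```python
# Hypothetical mapping relation: gesture action -> vehicle control result.
GESTURE_TO_CONTROL = {
    "circle": "unlock",
    "slide_up": "ac_on",
    "slide_down": "ac_off",
}

def control_vehicle(target_gesture, vehicle_state):
    """Look up the vehicle control result for the matched target gesture
    and apply it to a toy vehicle state; returns the result applied."""
    result = GESTURE_TO_CONTROL.get(target_gesture)
    if result == "unlock":
        vehicle_state["locked"] = False
    elif result == "ac_on":
        vehicle_state["ac"] = True
    elif result == "ac_off":
        vehicle_state["ac"] = False
    return result

state = {"locked": True, "ac": False}
print(control_vehicle("circle", state), state)
```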
As can be seen from the above description, in this embodiment of the invention, a first image captured by a camera module mounted on a target vehicle can be acquired, with a first real-time gesture action of a target user recorded in it; based on the first image, a target gesture action matching the first real-time gesture action can be found among the user-defined gesture actions corresponding to the target user; and the pre-stored mapping relation can then be searched to find the vehicle control result corresponding to the target gesture action and to control the target vehicle based on that result.
By adopting this method, the matching vehicle control result can be identified from the gesture action given by the user, and automatic vehicle control can be performed based on that result, which improves the safety of vehicle control. Moreover, users can customize their own gesture actions, which meets their personalized requirements and improves the user experience.
The identity of the user can also be recognized, so that the user-defined gesture actions bound to that user can be found; only that user can control the vehicle with those gestures, and other users cannot, which improves the accuracy and safety of vehicle control.
Another embodiment of the gesture-based vehicle control method provided by the present invention is described below.
Referring to fig. 2, fig. 2 is a flow chart illustrating another gesture-based vehicle control method according to an exemplary embodiment of the invention.
In one example, the target vehicle may be loaded with an electronic device having computing capabilities, which is capable of performing the steps of the method described in this embodiment, and the method described in this embodiment may be applied to the electronic device.
In another example, the target vehicle may carry only a camera module for capturing images, without an electronic device that executes the vehicle control method. The camera module may transmit the captured images to a server, and the server executes the steps of the method of this embodiment; in that case, the method is applied to the server.
The method of the embodiment may include the following steps:
step 202, a first image acquired by an imaging module loaded on a target vehicle is acquired, and a first real-time gesture of a target user is recorded in the first image.
In this embodiment, the specific method of step 202 may refer to the foregoing embodiments, and will not be described herein.
Step 204, after the identity feature of the target user is detected, determining the identity of the target user based on the identity feature, so as to find the user-defined gesture action of the target user based on the identity.
In this embodiment, it may be detected whether an identity feature of the target user is present, and the identity of the target user is then recognized based on the identity feature.
In one example, the identity feature may be a face feature of the target user, and then the face image of the target user may be acquired based on the camera module loaded on the target vehicle, and then the face feature may be extracted, and the identity of the target user may be identified based on the face feature.
In another example, the identity feature may be a voice feature of the target user, such as a voiceprint, and then the voice of the target user may be collected based on a sound collection module, such as a microphone, mounted on the target vehicle, and then the voice feature may be extracted, and the identity of the target user may be identified based on the voice feature.
In yet another example, the identity feature may be a height feature. For example, the height of the target user may be estimated from the user's size in the image and taken as the height feature; the measured height can then be compared with a stored target height to determine the identity of the target user.
Of course, in addition to the above examples, the identity of the target user may be recognized in other ways, for example from the Bluetooth or Wi-Fi signals of the target user's mobile phone; the identification is not limited to these examples.
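The height-based identity check mentioned above can be sketched as a nearest-match comparison against stored heights. The tolerance value, user names, and function name are illustrative assumptions, not part of the patent.

```python
def identify_by_height(measured_height_cm, registered_users, tolerance_cm=3.0):
    """Return the registered user whose stored height is closest to the
    measured height, or None if no stored height is within the tolerance."""
    best_user, best_diff = None, tolerance_cm
    for user, stored_height in registered_users.items():
        diff = abs(measured_height_cm - stored_height)
        if diff < best_diff:
            best_user, best_diff = user, diff
    return best_user

# Hypothetical registered users and their stored heights in centimeters.
users = {"alice": 168.0, "bob": 183.0}
print(identify_by_height(169.2, users))
```

Height alone is a weak identifier, which is presumably why the patent lists it alongside face and voice features rather than on its own.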
In this embodiment, after the identity of the target user is recognized, the user-defined gesture actions corresponding to the target user may be found based on that identity. The user-defined gesture actions corresponding to the target user may be determined in the following ways:
in one example, the target user may upload custom gesture actions in advance, and these custom gesture actions may be stored and bound to the identity of the target user. In addition, for each user-defined gesture action, the target user may further specify a corresponding vehicle control result, for example, the circled action corresponds to unlocking, the upward sliding action corresponds to turning on the air conditioner, the downward sliding action corresponds to turning off the air conditioner, and so on. On the basis, the mapping relation between the user-defined gesture actions and the vehicle control results can be stored, and the updating of the mapping relation is realized, so that the automatic vehicle control is performed based on the stored mapping relation in the subsequent step.
In another example, the target user's custom gesture action may also be learned. Referring to fig. 3, fig. 3 is a flow chart illustrating another gesture-based vehicle control method according to an exemplary embodiment of the present invention, including the steps of:
step 302, a second image acquired by an imaging module loaded on the target vehicle is acquired, and a second real-time gesture of the target user is recorded in the second image.
Step 304, obtaining mapping specification information, wherein the mapping specification information characterizes: and mapping the second real-time gesture action with a designated vehicle control result.
Step 306, determining the second real-time gesture as a user-defined gesture corresponding to the target user, and updating the mapping relation according to the mapping specification information.
In this embodiment, the camera module mounted on the target vehicle may further capture a second image, in which a second real-time gesture action of the target user may be recorded. Similarly, based on the second image, it may be determined whether the second real-time gesture action matches a user-defined gesture action of the target user. In practice, however, a matching target gesture action may not be found among the user-defined gesture actions, in which case automatic vehicle control cannot be performed based on a corresponding vehicle control result. Or the vehicle control result corresponding to the matched target gesture action may not be the control the target user expected. In these cases, the target user may choose to control the target vehicle manually.
For example, the target user may make a circling motion expecting the target vehicle to unlock, but the electronic device may fail to recognize the motion, and the target user then unlocks the vehicle manually. Alternatively, the electronic device may recognize the motion but automatically turn on the air conditioner, which is not what the target user wanted, and the user again unlocks the vehicle manually.
In this embodiment, when the target user performs such a manual operation, the vehicle control result produced by the manual operation may be obtained, and the mapping specification information may be derived from it. The mapping specification information characterizes the mapping between the second real-time gesture action and the specified vehicle control result. For the example above, the mapping specification information may be "circling corresponds to unlocking".
The mapping specification information may be formed by user input or may be received from the outside.
In some examples, the user may first select a vehicle control result in the vehicle's interactive interface for actively learning gestures (this becomes the specified vehicle control result); a second image is then captured while the user performs the second real-time gesture action, the gesture is recognized from the image, and the mapping specification information is formed.
In this embodiment, the second real-time gesture action may be taken as a user-defined gesture action corresponding to the target user, and the mapping relation may be updated using the mapping specification information.
Optionally, before the second real-time gesture action is determined to be a user-defined gesture action of the target user, it may be checked whether it matches an existing user-defined gesture action of the target user; if it does not match, it is stored and used as a user-defined gesture action of the target user.
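The learning flow above (store the second real-time gesture action if it is new, then update the mapping relation) can be sketched as follows. The data structures and names are hypothetical, and a real system would compare gesture features rather than string labels.

```python
def learn_gesture(realtime_gesture, specified_result, custom_gestures, mapping):
    """Store the second real-time gesture as a new user-defined gesture if it
    does not match an existing one, and update the mapping relation according
    to the mapping specification information. Returns True if a new gesture
    was stored, False if an existing gesture was only re-mapped."""
    if realtime_gesture in custom_gestures:
        # Already a known custom gesture: only rebind it to the specified result.
        mapping[realtime_gesture] = specified_result
        return False
    custom_gestures.add(realtime_gesture)
    mapping[realtime_gesture] = specified_result
    return True

gestures = {"circle"}
mapping = {"circle": "unlock"}
print(learn_gesture("wave", "ac_on", gestures, mapping))
print(mapping)
```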
Although step 204 is described after step 202 in this embodiment, in practice step 204 may also be performed before step 202 or in parallel with it; this is not particularly limited.
Step 206, based on the first image, finding a target gesture matched with the first real-time gesture in the user-defined gesture corresponding to the target user.
In this embodiment, the target gesture matched with the first real-time gesture may be found based on the first image, and the specific method may refer to the foregoing embodiment, which is not described herein.
In one example, some default gesture actions (which can also be understood as standardized gesture actions) may be preset. If the first real-time gesture action does not match any user-defined gesture action of the target user, it may still match a default gesture action; each default gesture action also corresponds to a vehicle control result, and the target vehicle can likewise be controlled automatically based on that result.
Specifically, priorities can be set for the user-defined gesture actions and the default gesture actions, and matching is performed in order of priority from high to low. The priorities may be preset or set by the user; this is not particularly limited.
Step 208, based on a pre-stored mapping relation, find the vehicle control result corresponding to the target gesture action, and control the target vehicle based on that vehicle control result, wherein the mapping relation characterizes the mapping between each defined gesture action and a vehicle control result.
In this embodiment, for the specific method of step 208, reference may be made to the foregoing embodiment, which is not limited here.
By adopting this method, the target vehicle can be controlled based on the user-defined gesture actions of the target user, which improves both security and user experience. Moreover, the gesture actions of the target user can be learned and automatically taken as the user's custom gesture actions, making the process seamless and intelligent for the user.
Corresponding to the embodiments of the gesture-based vehicle control method, the invention further provides embodiments of a gesture-based vehicle control device.
The embodiments of the gesture-based vehicle control device can be applied to electronic equipment. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. Taking software implementation as an example, the device in the logical sense is formed by the processor of the electronic equipment where the device is located reading corresponding computer program instructions from a non-volatile memory into memory for execution. In terms of hardware, fig. 4 shows a hardware configuration diagram of the electronic equipment where the gesture-based vehicle control device of the present invention is located.
Referring to fig. 4, there is provided an electronic device 40 comprising:
a processor 41; and,
a memory 42 for storing executable instructions of the processor;
wherein the processor 41 is configured to perform the above-mentioned method via execution of the executable instructions.
The processor 41 is capable of communicating with the memory 42 via a bus 43.
In addition to the processor, the memory, and the bus shown in fig. 4, the electronic equipment where the device is located in the embodiment generally includes other hardware according to the actual function of the electronic equipment, which will not be described here.
Referring to fig. 5, fig. 5 is a block diagram of a gesture-based vehicle control device according to an exemplary embodiment of the present invention, which may be applied to the electronic device shown in fig. 4 and includes: an acquisition unit 510, a search unit 520, and a control unit 530. Wherein:
an obtaining unit 510, configured to obtain a first image collected by a camera module loaded on a target vehicle, where a first real-time gesture of the target user is recorded in the first image;
the searching unit 520 is configured to search, based on the first image, for a target gesture that matches the first real-time gesture in a user-defined gesture corresponding to the target user;
the control unit 530 is configured to find a vehicle control result corresponding to the target gesture based on a pre-stored mapping relationship, and control the target vehicle based on the corresponding vehicle control result, where the mapping relationship characterizes a mapping relationship between each defined gesture and the vehicle control result.
Optionally, the searching unit 520 is specifically configured to:
extracting skeleton nodes of the target user from the first image, and taking the positions and/or position changes of the skeleton nodes in the first real-time gesture action as gesture characteristics;
and searching the target gesture based on the gesture characteristics.
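The skeleton-node feature described above (node positions and/or their change over time used as the gesture characteristic) can be sketched as below. The node names, the two-frame layout, and the flat position-plus-displacement vector are assumptions for illustration, not the patented feature definition.

```python
# Minimal sketch of the gesture characteristic described above: skeleton
# node positions plus their change between frames, flattened into one
# feature vector. Node names and vector layout are illustrative assumptions.

def gesture_feature(frames):
    """frames: list of {node_name: (x, y)} skeleton snapshots over time.
    Returns final positions plus per-node displacement, node-name sorted."""
    first, last = frames[0], frames[-1]
    feature = []
    for node in sorted(last):
        x, y = last[node]
        dx = x - first[node][0]   # position change across the gesture
        dy = y - first[node][1]
        feature.extend([x, y, dx, dy])  # position + position change
    return feature

frames = [
    {"wrist": (0.0, 0.0), "elbow": (1.0, 1.0)},  # first snapshot
    {"wrist": (0.5, 0.0), "elbow": (1.0, 1.0)},  # elbow static, wrist moved
]
vec = gesture_feature(frames)
```

A matcher can then compare such vectors against the stored user-defined gestures, so that both a static pose (zero displacement) and a moving gesture (non-zero displacement) are distinguishable.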
Optionally, the device further comprises:
a learning module for:
acquiring a second image acquired by a camera module loaded on the target vehicle, wherein a second real-time gesture of the target user is recorded in the second image;
obtaining mapping specification information, wherein the mapping specification information characterizes: the second real-time gesture is mapped with the specified vehicle control result;
and determining the second real-time gesture as a user-defined gesture corresponding to the target user, and updating the mapping relation according to the mapping specification information.
Optionally, the determining that the second real-time gesture is a user-defined gesture corresponding to the target user includes:
judging whether the second real-time gesture is matched with the user-defined gesture corresponding to the target user;
if not, the second real-time gesture is stored and is used as the user-defined gesture of the target user.
Optionally, the searching unit 520 is further configured to:
after an identity feature of the target user is detected, determine the identity of the target user based on the identity feature, so that the user-defined gesture actions corresponding to the target user can be found based on the identity.
Optionally, the identity feature includes one or more of:
voice characteristics, face characteristics, height characteristics.
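The identity-based lookup described above (determine who the user is from a detected identity feature, then load that user's custom gestures) can be sketched as follows; the similarity function and the threshold are stand-in assumptions, not a real voice or face matcher.

```python
# Illustrative sketch: resolving user identity from a detected identity
# feature so the correct set of user-defined gestures can be loaded.
# The similarity measure and threshold are assumptions for illustration.

def identify_user(detected, enrolled, threshold=0.8):
    """detected: feature vector; enrolled: {user_id: feature vector}.
    Returns the best-matching user id, or None if below the threshold."""
    def similarity(a, b):  # stand-in for a real face/voice/height matcher
        return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    best_id, best_sim = None, threshold
    for user_id, feature in enrolled.items():
        sim = similarity(detected, feature)
        if sim >= best_sim:
            best_id, best_sim = user_id, sim
    return best_id

enrolled = {"alice": [0.9, 0.1], "bob": [0.2, 0.8]}
who = identify_user([0.88, 0.12], enrolled)
```

Once the identity is resolved, it serves as the key for the per-user custom-gesture table, so two users can bind different control results to the same physical gesture.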
Optionally, the vehicle control result includes one or more of the following:
unlocking, locking, igniting, flameout, opening the vehicle-mounted equipment, closing the vehicle-mounted equipment and controlling the vehicle-mounted equipment to change in a specified mode.
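Dispatching a looked-up vehicle control result to the vehicle can be sketched as a table of actions keyed by the control-result names listed above. The `Vehicle` interface, the action names, and the dispatch table are hypothetical; the real mapping would drive actual vehicle and on-board equipment interfaces.

```python
# Hedged sketch of dispatching a vehicle control result found via the
# mapping relation. Vehicle and the action table are hypothetical names,
# not a real vehicle API.

class Vehicle:
    def __init__(self):
        self.locked = True
        self.engine = False

    def set_locked(self, locked): self.locked = locked
    def set_engine(self, on): self.engine = on

# control result -> action; mirrors the results enumerated above
CONTROL_ACTIONS = {
    "unlock":   lambda v: v.set_locked(False),
    "lock":     lambda v: v.set_locked(True),
    "ignite":   lambda v: v.set_engine(True),
    "flameout": lambda v: v.set_engine(False),
}

def control(vehicle, result):
    action = CONTROL_ACTIONS.get(result)
    if action is None:
        return False  # unknown control result; ignore safely
    action(vehicle)
    return True

car = Vehicle()
ok = control(car, "unlock")
```

Keeping the dispatch in one table means adding a new control result (e.g. opening a piece of on-board equipment) only requires registering one more entry, without touching the matching logic.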
Since the device embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solution of the present application. Those of ordinary skill in the art can understand and implement the invention without creative effort.
The embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the methods referred to above.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the method embodiments described above may be completed by program instructions instructing the relevant hardware. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs steps including those of the method embodiments described above. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications and substitutions do not depart from the spirit of the invention.

Claims (9)

1. A method of gesture-based vehicle control, the method comprising:
acquiring a first image acquired by a camera module loaded on a target vehicle, wherein a first real-time gesture of a target user is recorded in the first image;
based on the first image, searching a target gesture matched with the first real-time gesture in the user-defined gesture corresponding to the target user;
based on a pre-stored mapping relation, a vehicle control result corresponding to the target gesture action is found, and the target vehicle is controlled based on the corresponding vehicle control result, wherein the mapping relation characterizes mapping relation between each defined gesture action and the vehicle control result;
the updating process of the mapping relation comprises the following steps:
acquiring a second image acquired by a camera module loaded on the target vehicle, wherein a second real-time gesture of the target user is recorded in the second image;
obtaining mapping specification information, wherein the mapping specification information characterizes: the second real-time gesture is mapped with the specified vehicle control result;
the obtaining the mapping specification information includes:
under the condition that the second real-time gesture recorded in the second image cannot be matched with the user-defined gesture of the target user, acquiring a vehicle control result obtained after the target user performs manual operation, acquiring the mapping specification information based on the vehicle control result, wherein the mapping specification information characterizes the mapping between the second real-time gesture and the vehicle control result obtained after the manual operation,
or after the target user selects a specified vehicle control result, identifying the second real-time gesture recorded in the second image, and acquiring the mapping specification information based on the second real-time gesture, wherein the mapping specification information characterizes the mapping between the second real-time gesture and the specified vehicle control result selected by the target user;
and determining the second real-time gesture as a user-defined gesture corresponding to the target user, and updating the mapping relation according to the mapping specification information.
2. The method of claim 1, wherein the finding, based on the first image, a target gesture that matches the first real-time gesture from among the custom gesture corresponding to the target user includes:
extracting skeleton nodes of the target user from the first image, and taking the position and/or position change of each skeleton node in the first real-time gesture action as gesture characteristics;
and searching the target gesture based on the gesture characteristics.
3. The method of claim 1, wherein the determining that the second real-time gesture is a custom gesture corresponding to the target user comprises:
judging whether the second real-time gesture is matched with the user-defined gesture corresponding to the target user;
if not, the second real-time gesture is stored and is used as the user-defined gesture of the target user.
4. A method according to any one of claims 1 to 3, further comprising:
after the identity characteristic of the target user is detected, the identity of the target user is determined based on the identity characteristic, so that the user-defined gesture corresponding to the target user is found based on the identity.
5. The method of claim 4, wherein the identity feature comprises one or more of:
voice characteristics, face characteristics, height characteristics.
6. A method according to any one of claims 1 to 3, wherein the vehicle control results include one or more of:
unlocking, locking, igniting, flameout, opening the vehicle-mounted equipment, closing the vehicle-mounted equipment, controlling the vehicle-mounted equipment to perform specified change, advancing, reversing and executing parking processes.
7. A gesture-based vehicle control apparatus, comprising:
the system comprises an acquisition unit, a control unit and a control unit, wherein the acquisition unit is used for acquiring a first image acquired by a camera module loaded on a target vehicle, and a first real-time gesture action of a target user is recorded in the first image;
the searching unit is used for searching a target gesture matched with the first real-time gesture in the user-defined gesture corresponding to the target user based on the first image;
the control unit is used for searching a vehicle control result corresponding to the target gesture action based on a pre-stored mapping relation, controlling the target vehicle based on the corresponding vehicle control result, and the mapping relation characterizes the mapping relation between the gesture action and the vehicle control result;
the updating process of the mapping relation comprises the following steps:
acquiring a second image acquired by a camera module loaded on the target vehicle, wherein a second real-time gesture of the target user is recorded in the second image;
obtaining mapping specification information, wherein the mapping specification information characterizes: the second real-time gesture is mapped with the specified vehicle control result;
the obtaining the mapping specification information includes:
under the condition that the second real-time gesture recorded in the second image cannot be matched with the user-defined gesture of the target user, acquiring a vehicle control result obtained after the target user performs manual operation, acquiring the mapping specification information based on the vehicle control result, wherein the mapping specification information characterizes the mapping between the second real-time gesture and the vehicle control result obtained after the manual operation,
or after the target user selects a specified vehicle control result, identifying the second real-time gesture recorded in the second image, and acquiring the mapping specification information based on the second real-time gesture, wherein the mapping specification information characterizes the mapping between the second real-time gesture and the specified vehicle control result selected by the target user;
and determining the second real-time gesture as a user-defined gesture corresponding to the target user, and updating the mapping relation according to the mapping specification information.
8. A storage medium having a program stored thereon, which when executed by a processor, implements the steps of the method of any of claims 1-6.
9. An electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any of claims 1-6 when the program is executed by the processor.
CN202110993038.3A 2021-08-27 2021-08-27 Gesture-based vehicle control method, device and storage medium Active CN113696849B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110993038.3A CN113696849B (en) 2021-08-27 2021-08-27 Gesture-based vehicle control method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110993038.3A CN113696849B (en) 2021-08-27 2021-08-27 Gesture-based vehicle control method, device and storage medium

Publications (2)

Publication Number Publication Date
CN113696849A CN113696849A (en) 2021-11-26
CN113696849B true CN113696849B (en) 2023-04-28

Family

ID=78655725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110993038.3A Active CN113696849B (en) 2021-08-27 2021-08-27 Gesture-based vehicle control method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113696849B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114201047A (en) * 2021-12-10 2022-03-18 珠海格力电器股份有限公司 Control method and device of control panel

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426658A (en) * 2015-10-29 2016-03-23 东莞酷派软件技术有限公司 Vehicle pre-starting method and related apparatus
US20170120932A1 (en) * 2015-11-03 2017-05-04 GM Global Technology Operations LLC Gesture-based vehicle-user interaction
CN108255284A (en) * 2016-12-28 2018-07-06 上海合既得动氢机器有限公司 A kind of gestural control system and electric vehicle
CN107719303A (en) * 2017-09-05 2018-02-23 观致汽车有限公司 A kind of door-window opening control system, method and vehicle
US11417163B2 (en) * 2019-01-04 2022-08-16 Byton North America Corporation Systems and methods for key fob motion based gesture commands
CN110435561B (en) * 2019-07-26 2021-05-18 中国第一汽车股份有限公司 Vehicle control method and system and vehicle
CN110764616A (en) * 2019-10-22 2020-02-07 深圳市商汤科技有限公司 Gesture control method and device
CN112698716A (en) * 2019-10-23 2021-04-23 上海博泰悦臻电子设备制造有限公司 In-vehicle setting and control method, system, medium and device based on gesture recognition
CN111625086A (en) * 2020-04-24 2020-09-04 爱驰汽车有限公司 Vehicle interaction method, system, device and storage medium based on user action

Also Published As

Publication number Publication date
CN113696849A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
US10902056B2 (en) Method and apparatus for processing image
KR102298412B1 (en) Surgical image data learning system
CN111857356B (en) Method, device, equipment and storage medium for recognizing interaction gesture
JP2022504704A (en) Target detection methods, model training methods, equipment, equipment and computer programs
CN112606796B (en) Automatic opening and closing control method and system for vehicle trunk and vehicle
CN103729120B (en) For producing the method and its electronic equipment of thumbnail
CN108983979B (en) Gesture tracking recognition method and device and intelligent equipment
CN106548145A (en) Image-recognizing method and device
US20150169942A1 (en) Terminal configuration method and terminal
CN107871001B (en) Audio playing method and device, storage medium and electronic equipment
CN107729092B (en) Automatic page turning method and system for electronic book
US11825278B2 (en) Device and method for auto audio and video focusing
CN112287994A (en) Pseudo label processing method, device, equipment and computer readable storage medium
CN107992841A (en) The method and device of identification objects in images, electronic equipment, readable storage medium storing program for executing
CN106203306A (en) The Forecasting Methodology at age, device and terminal
CN111985385A (en) Behavior detection method, device and equipment
CN106295599A (en) The control method of vehicle and device
KR20190140519A (en) Electronic apparatus and controlling method thereof
CN106446946A (en) Image recognition method and device
CN108853953A (en) A kind of shape up exercise method, apparatus, system, computer equipment and storage medium
CN113696849B (en) Gesture-based vehicle control method, device and storage medium
CN109857894A (en) Parking lot car searching method, apparatus, storage medium and computer equipment
CN114581998A (en) Deployment and control method, system, equipment and medium based on target object association feature fusion
CN113696904B (en) Processing method, device, equipment and medium for controlling vehicle based on gestures
KR20190119205A (en) Electronic device and control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231121

Address after: Floors 3-7, Building T3, No. 377 Songhong Road, Changning District, Shanghai, 200000

Patentee after: Shanghai Jiayu Intelligent Technology Co.,Ltd.

Address before: 200050 room 8041, 1033 Changning Road, Changning District, Shanghai (nominal Floor 9)

Patentee before: Shanghai xianta Intelligent Technology Co.,Ltd.