CN115617046A - Path planning method and device, electronic equipment and storage medium - Google Patents

Path planning method and device, electronic equipment and storage medium

Info

Publication number
CN115617046A
CN115617046A (application number CN202211356890.0A)
Authority
CN
China
Prior art keywords
path
brain
data
computer
path planning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211356890.0A
Other languages
Chinese (zh)
Inventor
张晓胜
杨斯淇
裴斌
张栋
刘大猛
李宁
房远志
张晨辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
FAW Group Corp
Tianjin Institute of Advanced Equipment of Tsinghua University
Original Assignee
Tsinghua University
FAW Group Corp
Tianjin Institute of Advanced Equipment of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University, FAW Group Corp, Tianjin Institute of Advanced Equipment of Tsinghua University filed Critical Tsinghua University
Priority to CN202211356890.0A priority Critical patent/CN115617046A/en
Publication of CN115617046A publication Critical patent/CN115617046A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An embodiment of the invention discloses a path planning method, a path planning apparatus, an electronic device, and a storage medium. The method may include: in response to a path planning instruction, acquiring path planning data, recognizing the path planning data, and generating an initial path according to the obtained first recognition result, where the path planning data include body-part data and/or voice data for path planning, the body-part data being collected from a target body part; acquiring brain-computer data for adjusting the initial path, recognizing the brain-computer data, and obtaining path adjustment information according to the obtained second recognition result; and adjusting the initial path based on the path adjustment information to obtain a target path. The technical solution of this embodiment achieves flexible path planning and thereby meets the practical need for flexible control of mobile robots.

Description

Path planning method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of automatic control, in particular to a path planning method and device, electronic equipment and a storage medium.
Background
With the development of science and technology, mobile robots are widely used in home service, logistics and warehousing, scene detection, and similar applications. To enable a mobile robot to complete complex tasks efficiently, a path can be planned in advance according to the task, so that the robot moves along that path to complete the task.
However, current path planning can only be realized through preset paths and cannot meet the practical need for flexible control of mobile robots.
Disclosure of Invention
Embodiments of the invention provide a path planning method and apparatus, an electronic device, and a storage medium to achieve flexible path planning.
According to an aspect of the present invention, there is provided a path planning method, which may include:
in response to a path planning instruction, acquiring path planning data, recognizing the path planning data, and generating an initial path according to the obtained first recognition result, where the path planning data may include body-part data and/or voice data for path planning, the body-part data being collected from a target body part;
acquiring brain-computer data for adjusting the initial path, recognizing the brain-computer data, and obtaining path adjustment information according to the obtained second recognition result; and
adjusting the initial path based on the path adjustment information to obtain a target path.
According to another aspect of the present invention, there is provided a path planning apparatus, which may include:
an initial path generation module, configured to acquire path planning data in response to a path planning instruction, recognize the path planning data, and generate an initial path according to the obtained first recognition result, where the path planning data include body-part data and/or voice data for path planning, the body-part data being collected from a target body part;
a path adjustment information obtaining module, configured to acquire brain-computer data for adjusting the initial path, recognize the brain-computer data, and obtain path adjustment information according to the obtained second recognition result; and
a target path obtaining module, configured to adjust the initial path based on the path adjustment information to obtain a target path.
According to another aspect of the present invention, there is provided an electronic device, which may include:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, causes the at least one processor to implement the path planning method provided by any embodiment of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium having stored thereon computer instructions for causing a processor to execute a method of path planning provided by any of the embodiments of the present invention.
According to the technical solution of the embodiments, path planning data are acquired in response to a path planning instruction, the data are recognized, and an initial path is generated according to the obtained first recognition result; the path planning data include body-part data and/or voice data for path planning, the body-part data being collected from a target body part. Brain-computer data for adjusting the initial path are then acquired and recognized, and path adjustment information is obtained according to the second recognition result; finally, the initial path is adjusted based on the path adjustment information to obtain a target path. This scheme requires no preset path: an initial path is generated in real time from the acquired path planning data, and path adjustment information generated from the acquired brain-computer data refines it, achieving flexible path planning and meeting the practical need for flexible control of mobile robots.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present invention, nor are they intended to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below cover only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a path planning method according to an embodiment of the present invention;
fig. 2 is a flowchart of another path planning method according to an embodiment of the present invention;
fig. 3 is a flowchart of an example of initial path generation in another path planning method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an example of three-dimensional space modeling in another path planning method according to the embodiment of the present invention;
fig. 5 is a flowchart of another path planning method provided in accordance with an embodiment of the present invention;
fig. 6 is a flowchart of an example of brain-computer data identification in another path planning method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a data identification model in another path planning method according to an embodiment of the present invention;
fig. 8 is a flowchart of an alternative example of another path planning method according to an embodiment of the present invention;
fig. 9 is a block diagram of a path planning apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device implementing the path planning method according to the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings are used to distinguish similar elements and do not necessarily describe a particular sequence or chronological order. Data so labeled are interchangeable where appropriate, so that the embodiments described herein can be practiced in sequences other than those illustrated. Terms such as "target" and "original" are used similarly and are not described in detail herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, cover a non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those expressly listed, but may include other steps or elements not expressly listed or inherent to it.
Fig. 1 is a flowchart of a path planning method provided in an embodiment of the present invention. The embodiment is applicable to path planning scenarios. The method can be executed by the path planning apparatus provided by the embodiments of the invention; the apparatus can be implemented in software and/or hardware and integrated into an electronic device, which may be a user terminal or a server.
Referring to fig. 1, the method of the embodiment of the present invention specifically includes the following steps:
s110, responding to a path planning instruction, obtaining path planning data, identifying the path planning data, and generating an initial path according to an obtained first identification result, wherein the path planning data comprises position data and/or voice data used for path planning, and the position data is obtained by collecting a target position.
The path planning instruction may be an instruction for performing path planning, which is triggered automatically or manually, and the path planning data is obtained in response to the path planning instruction. In practical application, optionally, the path planning data may include position data and/or voice data that can be used for path planning, where the position data may be data acquired from a target position, such as mouth shape data acquired from a mouth, gesture data acquired from a hand, or foot gesture data acquired from a foot, and the like, and is not specifically limited herein; the voice data may be data obtained by collecting sound.
And identifying the path planning data, and generating an initial path according to the first identification result obtained by identifying the path planning data. For example, the mouth shape data is identified based on the structured light or the Time of flight (TOF), and an initial path is generated according to the mouth shape identification result obtained by the mouth shape data; recognizing voice data based on an acoustic model, and generating an initial path according to a voice recognition result obtained by the recognition; etc., which are not specifically limited herein.
S120, acquire brain-computer data for adjusting the initial path, recognize the brain-computer data, and obtain path adjustment information according to the obtained second recognition result.
In practice, the initial path generated in the preceding step may differ from the actually intended movement path of the mobile robot. To plan a target path that better matches the intended path, brain-computer data for adjusting the initial path can be acquired. Optionally, the brain-computer data may include at least one of an event-related synchronization potential, an event-related desynchronization potential, a steady-state visual evoked potential, a slow cortical potential, the μ rhythm, and the β rhythm. The brain-computer data are then recognized, and path adjustment information is obtained from the second recognition result. The path adjustment information indicates how to adjust the initial path, for example: move the end point of the initial path forward by 1 meter; rotate the (N+1)-th waypoint counterclockwise by 30° about the N-th waypoint; and so on, without specific limitation here.
It should be noted that brain-computer (EEG) data reflect the summed postsynaptic potentials generated synchronously by large populations of neurons during brain activity. Recorded on the cortical surface or scalp, they capture the electrical changes of brain activity and provide an overall picture of the electrophysiological activity of brain nerve cells; that is, they can accurately reflect human intent. Moreover, in a noisy environment or in outer space, a mobile robot may be unable to receive a human voice command effectively, yet it can still receive a brain-computer command effectively through a device such as a brain-computer chip. Generating the path adjustment information from brain-computer data therefore allows it to be produced effectively and accurately in a variety of environments.
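The example adjustments described above, moving the end point of the initial path forward by one meter and rotating a waypoint counterclockwise about its predecessor, are plain 2-D geometry. The sketch below illustrates both operations; the function names are illustrative, not from the patent:

```python
import math

def translate_point(p, heading_rad, distance):
    """Move a 2-D waypoint `distance` meters along heading `heading_rad`."""
    x, y = p
    return (x + distance * math.cos(heading_rad),
            y + distance * math.sin(heading_rad))

def rotate_about(p, center, angle_rad):
    """Rotate waypoint `p` counterclockwise about `center` by `angle_rad`."""
    px, py = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (center[0] + c * px - s * py,
            center[1] + s * px + c * py)
```

For instance, "rotate the (N+1)-th waypoint by 30° about the N-th waypoint" becomes `rotate_about(path[n + 1], path[n], math.radians(30))`.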
S130, adjust the initial path based on the path adjustment information to obtain a target path.
The initial path is adjusted based on the path adjustment information, yielding a target path that better matches the actually intended movement path.
According to this technical scheme, path planning data are acquired in response to a path planning instruction, the data are recognized, and an initial path is generated according to the first recognition result; the path planning data include body-part data and/or voice data for path planning, the body-part data being collected from a target body part. Brain-computer data for adjusting the initial path are acquired and recognized, and path adjustment information is obtained from the second recognition result; the initial path is then adjusted based on this information to obtain a target path. No preset path is required: the initial path is generated in real time from the acquired path planning data and refined with adjustment information generated from the brain-computer data, achieving flexible path planning and meeting the practical need for flexible control of mobile robots.
Fig. 2 is a flowchart of another path planning method provided in an embodiment of the present invention. This embodiment is an optimization of the technical solutions above. Here, optionally, when the path planning data include body-part data, recognizing the path planning data and generating the initial path according to the obtained first recognition result may include: recognizing the body-part data and generating, from the first recognition result, an initial path lying in the three-dimensional space in which the target body part is located. After the initial path is generated according to the first recognition result, the method further includes: obtaining a target map associated with that three-dimensional space and built by simultaneous localization and mapping, matching the initial path to the target map, and updating the initial path according to the obtained first matching result. Terms identical or corresponding to those in the embodiments above are not explained again here.
Referring to fig. 2, the method of the present embodiment may specifically include the following steps:
s210, responding to the path planning instruction, acquiring path planning data, wherein the path planning data can comprise position data.
S220, identifying the part of the data, and generating an initial path according to the obtained first identification result, wherein the initial path is positioned in a three-dimensional space where the target part is positioned.
Wherein the portion of data is identified to generate an initial path in a three-dimensional space in which the target portion is located.
S230, obtain a target map associated with the three-dimensional space and built by simultaneous localization and mapping, match the initial path to the target map, and update the initial path according to the obtained first matching result.
The target map may be a map associated with the three-dimensional space and obtained by simultaneous localization and mapping (SLAM), e.g., a SLAM map of the area in which the mobile robot moves. The initial path generated from the body-part data in the three-dimensional space does not necessarily align with the target map; for example, if the target map is reconstructed from a lidar while the body-part data come from a camera, the two are misaligned. The initial path can therefore be matched to the target map, yielding an initial path registered to it. After the target path is subsequently generated from this initial path, the mobile robot can be controlled to move along a target path consistent with the target map.
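Matching a path onto a SLAM map can take many forms, and the patent does not fix an algorithm. As a minimal, purely illustrative sketch (the function name and the occupancy-grid convention are assumptions), each waypoint can be snapped to the nearest free cell of the map's occupancy grid:

```python
def snap_path_to_grid(path, occupancy, resolution=0.1):
    """Snap each (x, y) waypoint to the nearest free cell center of a
    SLAM occupancy grid (0 = free, 1 = occupied). Illustrative only:
    the patent does not specify the matching algorithm."""
    free = [(r, c) for r, row in enumerate(occupancy)
            for c, v in enumerate(row) if v == 0]
    snapped = []
    for x, y in path:
        # nearest free cell by squared Euclidean distance in map coordinates
        r, c = min(free, key=lambda rc: (rc[0] * resolution - y) ** 2 +
                                        (rc[1] * resolution - x) ** 2)
        snapped.append((c * resolution, r * resolution))
    return snapped
```

A production system would instead register the two coordinate frames (e.g., with a rigid transform) before snapping, but the sketch conveys the idea of forcing the path onto the map the robot is actually controlled on.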
And S240, acquiring brain-computer data for adjusting the initial path, identifying the brain-computer data, and obtaining path adjustment information according to the obtained second identification result.
And S250, adjusting the initial path based on the path adjustment information to obtain a target path.
According to the technical scheme of this embodiment, recognizing the body-part data yields an initial path in the three-dimensional space in which the target body part is located. Because this initial path does not necessarily align with the target map, while the robot's movement is controlled on the target map, the initial path is matched to the target map to obtain an initial path registered to it.
In an optional technical solution, when the body-part data include videos of the target body part captured from at least two viewing angles, recognizing the body-part data and generating the initial path according to the obtained first recognition result may include: for the video captured at each viewing angle, splitting the video into frames to obtain body-part images and extracting image features from them; matching the image features across the different body-part images and reconstructing a three-dimensional space from the obtained second matching result; and obtaining the pointing information of the target body part from the three-dimensional space, computing from it the intersection of the pointing direction's extension with the ground, and generating the initial path from the intersection points.
To generate an initial path in three-dimensional space, the target body part is captured from at least two viewing angles. The video from each viewing angle is split into frames to obtain at least one body-part image, and features are extracted from each image. Because images captured at the same moment from different viewing angles share feature points, the image features of different body-part images, specifically those captured at the same moment, can be matched to reconstruct a three-dimensional space. The reconstructed space contains not only environment information but also information about the target body part, so the part's pointing information, which describes where the part points, can be read from the space. From the pointing information, the intersection points of the pointing direction's extension with the ground are obtained, and the initial path is generated from these intersection points, effectively producing an initial path.
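The cross-view feature-matching step above is commonly implemented as nearest-neighbor descriptor matching with Lowe's ratio test; the patent mentions SIFT features but no matching rule, so the ratio test here is an assumption:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor descriptor matching with Lowe's ratio test, as
    commonly used with SIFT descriptors. Returns (index_a, index_b)
    pairs that pass the test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # accept only if the best match is clearly better than the runner-up
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The accepted pairs are then used to triangulate three-dimensional points and jointly optimize camera poses, as described in the hand example below.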
To make the solution clearer, consider a concrete example. Referring to fig. 3, take the hand as the target body part: cameras at different viewing angles capture hand images, and image features (e.g., Scale-Invariant Feature Transform, SIFT) are extracted from them. These features are then matched across the hand images from different viewpoints, jointly optimizing a set of three-dimensional points and the camera poses to be consistent with the matches, thereby generating a three-dimensional space. The pointing information of the hand is taken from this space, the intersection of the hand's pointing direction with the ground is computed from it, and the initial path is generated from the intersection points. Finally, the initial path is matched to a SLAM map used to control the movement of the mobile robot.
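The intersection of the hand's pointing direction with the ground reduces to a ray-plane intersection. A minimal sketch, assuming the ground is the plane z = 0 and the pointing information supplies an origin and direction in world coordinates:

```python
import numpy as np

def ray_ground_intersection(origin, direction, ground_z=0.0):
    """Intersect the pointing ray (hand position plus pointing direction)
    with the ground plane z = ground_z. Returns the 3-D intersection
    point, or None if the ray is parallel to the plane or points away."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if abs(direction[2]) < 1e-12:
        return None          # ray parallel to the ground plane
    t = (ground_z - origin[2]) / direction[2]
    if t <= 0:
        return None          # intersection behind the hand
    return origin + t * direction
```

For example, a hand at height 1 m pointing forward and downward at 45° yields an intersection 1 m ahead on the ground.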
In practice, the three-dimensional space may optionally be generated (i.e., modeled) with a Block-NeRF model: hand images are input to the model, and the three-dimensional space is obtained from its output. Fig. 4 shows the model's working process: the radiance field and density of the three-dimensional scene are modeled by the Block-NeRF weights, and volume rendering synthesizes hand images at viewpoints other than the captured ones. Given the scene, this process renders observed scene content from a set of input images and their camera poses, and then from previously unobserved viewpoints.
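The volume-rendering step can be summarized by the standard NeRF compositing equation, w_i = T_i (1 - exp(-σ_i δ_i)) with transmittance T_i = exp(-Σ_{j<i} σ_j δ_j). The sketch below shows only that equation, not Block-NeRF itself:

```python
import numpy as np

def composite_weights(sigmas, deltas):
    """Standard NeRF volume-rendering weights along one ray:
    w_i = T_i * (1 - exp(-sigma_i * delta_i)), where T_i is the
    transmittance accumulated over the samples in front of sample i."""
    alpha = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return trans * alpha

def render_color(sigmas, deltas, colors):
    """Composite per-sample RGB colors into a single pixel color."""
    w = composite_weights(sigmas, deltas)
    return (w[:, None] * colors).sum(axis=0)
```

An opaque first sample (very large σ) absorbs all the weight, so the rendered pixel takes that sample's color, which is the behavior volume rendering relies on to reproduce solid surfaces.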
Fig. 5 is a flowchart of another path planning method provided in the embodiment of the present invention. The present embodiment is optimized based on the above technical solutions. In this embodiment, optionally, identifying the brain-computer data, and obtaining the path adjustment information according to the obtained second identification result may include: acquiring a trained brain-computer data recognition model, wherein the brain-computer data recognition model comprises a brain-computer feature extraction network and a brain-computer feature recognition network; inputting brain-computer data into a brain-computer feature extraction network, and obtaining brain-computer features according to output results of the brain-computer feature extraction network; and inputting the brain-computer characteristics into the brain-computer characteristic identification network, and obtaining path adjustment information according to a second identification result output by the brain-computer characteristic identification network. The same or corresponding terms as those in the above embodiments are not explained in detail herein.
Referring to fig. 5, the method of this embodiment may specifically include the following steps:
s310, responding to a path planning instruction, obtaining path planning data, identifying the path planning data, and generating an initial path according to an obtained first identification result, wherein the path planning data comprises position data and/or voice data used for path planning, and the position data is obtained by collecting a target position.
S320, obtaining brain-computer data used for adjusting the initial path and a trained brain-computer data recognition model, wherein the brain-computer data recognition model comprises a brain-computer feature extraction network and a brain-computer feature recognition network.
The brain-computer data recognition model recognizes brain-computer data; it may include a brain-computer feature extraction network that extracts brain-computer features from the data and a brain-computer feature recognition network that recognizes those features. In other words, the model comprises an interconnected feature extraction network and feature recognition network. In practice, there may optionally be one, two, or more feature extraction networks, each extracting the brain-computer features corresponding to one or more items of path adjustment information. An item of path adjustment information can be understood as one path adjustment command, i.e., human intent information for adjusting the initial path. Optionally, the feature extraction network may be a spiking neural network (SNN) or a Hodgkin-Huxley network, where the latter can be understood as a specific instance of an SNN.
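For intuition, spiking dynamics can be sketched with a leaky integrate-and-fire neuron, a heavily simplified stand-in for the Hodgkin-Huxley model named above; all constants here are illustrative, not from the patent:

```python
def lif_spikes(current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: membrane potential v leaks toward
    rest with time constant tau, integrates the input current, and emits
    a spike (then resets) whenever v crosses v_thresh. A much-simplified
    stand-in for Hodgkin-Huxley dynamics."""
    v, spikes = v_reset, []
    for i_t in current:
        v += dt * (-v / tau + i_t)   # leak + input integration
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes
```

An SNN feature extractor would feed preprocessed EEG samples through layers of such units and read features from the resulting spike trains.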
S330, input the brain-computer data into the brain-computer feature extraction network and obtain brain-computer features from its output.
S340, input the brain-computer features into the brain-computer feature recognition network and obtain path adjustment information from the second recognition result it outputs.
S350, adjust the initial path based on the path adjustment information to obtain a target path.
According to the technical scheme of this embodiment, a trained brain-computer data recognition model is obtained, which may include a brain-computer feature extraction network and a brain-computer feature recognition network. The brain-computer data are input into the feature extraction network to obtain brain-computer features, which are then input into the feature recognition network to obtain the path adjustment information, so that the path adjustment information is recognized effectively.
In an optional technical scheme, the brain-computer feature recognition network may include a generator connected to the feature extraction network and a discriminator connected to the generator. Inputting the brain-computer features into the recognition network and obtaining the path adjustment information from its second recognition result may include: inputting the features into the generator and obtaining, from its output, simulated data imitating the brain-computer data; and inputting both the brain-computer data and the simulated data into the discriminator, which outputs the second recognition result from which the path adjustment information is obtained. That is, the features output by the extraction network are fed to the generator, which simulates brain-computer data from them; the real and simulated data are then fed to the discriminator, which recognizes the path adjustment information from the two.
To illustrate the above technical solution more concretely, a specific example is described below. As shown in fig. 6, a multi-compartment synaptic connection model and a plurality of different types of Hodgkin-Huxley models (specific examples of SNN models) are established; a large amount of brain-computer data is acquired through a brain-computer chip and preprocessed, where the brain-computer data may include event-related synchronization potentials, desynchronization potentials, steady-state visual evoked potentials, slow cortical potentials, μ rhythms, and β rhythms, which is equivalent to obtaining the element-grid input; a deep learning model is trained with the preprocessed brain-computer data to obtain a data recognition model; and the input brain-computer data are recognized by the data recognition model to obtain path adjustment information (i.e., human idea information). On this basis, illustratively, referring to fig. 7, the data recognition model may include SNNs, a generator for capturing the data distribution, and a discriminator for estimating the probability that a sample comes from the training data, with the generator being iteratively evaluated through an adversarial process to determine which model best fits the actual data distribution.
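The Hodgkin-Huxley model named above is a standard conductance-based neuron model. As a point of reference, a single compartment can be integrated with forward Euler; the parameters below are the textbook squid-axon values, not values from the patent, and the multi-compartment synaptic connections of fig. 6 are not modelled:

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (mV, ms, uA/cm^2 units).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of one HH compartment under constant input current."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting-state initial conditions
    trace = []
    for _ in range(round(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
        I_K = g_K * n**4 * (V - E_K)          # potassium current
        I_L = g_L * (V - E_L)                 # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)
    return np.array(trace)

trace = simulate()
print("peak membrane potential (mV):", trace.max())
```

With a suprathreshold constant current (10 μA/cm² here), the membrane potential spikes repetitively, which is the spike-train behaviour SNN-style models exploit for encoding.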
In order to better understand the above technical solutions as a whole, the following description is given with reference to a specific example. Taking a mobile robot represented by an Automated Guided Vehicle (AGV) as an example, refer to the AGV path planning process based on brain-computer chip and gesture recognition shown in fig. 8, specifically:
1. A camera collects hand videos or gesture videos.
2. The hand video is recognized with a Block-NeRF model to generate an initial path; the initial path is matched to a SLAM map and updated according to the matching result.
3. Brain-computer data are acquired through a brain-computer chip and recognized via an SNN bionic learning algorithm and a generative adversarial network (generator + discriminator) to obtain human idea information.
4. The initial path on the SLAM map is adjusted based on the human idea information; when detection is finished (i.e., no new brain-computer data arrive), the target path is obtained and used for navigation control of the AGV.
5. In addition, the brain-computer acquisition equipment can transmit the acquired brain-computer data to the controller, so that the controller can control the AGV trajectory and speed in real time according to the human idea.
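The control flow of the five steps above can be sketched as a small orchestration loop. Every function and value below is a hypothetical stand-in for a trained component (gesture recognizer, brain-data decoder), not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float
    y: float

def initial_path_from_gesture(pointing_target):
    """Steps 1-2 stand-in: straight path from the origin to the pointed-at target."""
    x, y = pointing_target
    return [Waypoint(x * t / 4.0, y * t / 4.0) for t in range(5)]

def adjust_path(path, adjustment):
    """Step 4 stand-in: shift every waypoint by a decoded 'idea' offset."""
    dx, dy = adjustment
    return [Waypoint(p.x + dx, p.y + dy) for p in path]

# Steps 1-2: camera + gesture recognition yield an initial path (here toward (8, 4)).
path = initial_path_from_gesture((8.0, 4.0))

# Steps 3-4: each new batch of brain-computer data yields one adjustment; when no
# new data arrive, the loop ends and the current path is the target path.
pending_adjustments = [(0.0, 1.0), (0.5, 0.0)]   # decoded path-adjustment info
for adj in pending_adjustments:
    path = adjust_path(path, adj)

target_path = path
print([(p.x, p.y) for p in target_path])
```

Step 5 (real-time trajectory and speed control from streaming brain data) would replace the fixed `pending_adjustments` list with a live feed from the acquisition equipment.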
Fig. 9 is a block diagram of a path planning apparatus according to an embodiment of the present invention, which is configured to execute the path planning method of any of the above embodiments. The apparatus and the path planning method belong to the same inventive concept; for details not described in this embodiment, reference may be made to the embodiments of the path planning method. Referring to fig. 9, the apparatus may specifically include: an initial path generating module 410, a path adjustment information obtaining module 420 and a target path obtaining module 430.
The initial path generating module 410 is configured to, in response to a path planning instruction, obtain path planning data, identify the path planning data, and generate an initial path according to an obtained first identification result, where the path planning data may include position data and/or voice data used for path planning, and the position data is obtained by acquiring a target position;
a path adjustment information obtaining module 420, configured to obtain brain-computer data that may be used to adjust the initial path, identify the brain-computer data, and obtain path adjustment information according to the obtained second identification result;
and a target path obtaining module 430, configured to adjust the initial path based on the path adjustment information to obtain a target path.
Optionally, when the path planning data includes location data, the initial path generating module 410 may include:
the initial path generating unit is used for identifying the position data and generating an initial path according to an obtained first identification result, wherein the initial path is located in the three-dimensional space where the target part is located;
the above path planning apparatus may further include:
the initial path matching module is used for, after the initial path is generated according to the obtained first recognition result, acquiring a target map which is associated with the three-dimensional space and constructed based on simultaneous localization and mapping (SLAM), and matching the initial path to the target map;
and the initial path updating module is used for updating the initial path according to the obtained first matching result.
On this basis, optionally, when the position data includes part videos of the target part acquired at at least two viewing angles, the initial path generating unit may include:
the image feature obtaining subunit, used for framing the part video acquired at each viewing angle to obtain part images, and performing feature extraction on the part images to obtain image features;
the three-dimensional space reconstruction subunit, used for matching the image features across the different part images and reconstructing the three-dimensional space according to an obtained second matching result;
and the initial path generating subunit, used for acquiring pointing information of the target part from the three-dimensional space, obtaining the intersection point of the extension direction pointed by the target part with the ground according to the pointing information, and generating the initial path according to the intersection point.
Alternatively, the target site may include a hand.
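The intersection computed by the initial path generating subunit, i.e. extending the pointing direction of the target part until it meets the ground, is an ordinary ray-plane intersection. A minimal sketch, assuming the ground is the plane z = 0 and that the hand position and pointing direction have already been recovered in 3D (the coordinates below are illustrative):

```python
import numpy as np

def ground_intersection(origin, direction, eps=1e-9):
    """Intersect the ray origin + t*direction (t >= 0) with the ground plane z = 0.

    Returns the (x, y) intersection point, or None when the pointing direction
    is parallel to the ground or points away from it.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if abs(direction[2]) < eps:
        return None                      # parallel to the ground plane
    t = -origin[2] / direction[2]
    if t < 0.0:
        return None                      # intersection lies behind the hand
    hit = origin + t * direction
    return float(hit[0]), float(hit[1])

# Hypothetical example: a hand at 1.5 m height pointing forward and downward.
hand = (0.0, 0.0, 1.5)
pointing = (2.0, 1.0, -1.0)              # direction recovered from the 3D hand pose
print(ground_intersection(hand, pointing))
```

The returned ground point would then serve as the endpoint from which the initial path is generated.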
Optionally, the path adjustment information obtaining module 420 may include:
the brain-computer data recognition model unit is used for acquiring a trained brain-computer data recognition model, wherein the brain-computer data recognition model comprises a brain-computer feature extraction network and a brain-computer feature recognition network;
the brain-computer characteristic obtaining unit is used for inputting brain-computer data into the brain-computer characteristic extraction network and obtaining brain-computer characteristics according to an output result of the brain-computer characteristic extraction network;
and the path adjustment information obtaining unit is used for inputting the brain-computer characteristics into the brain-computer characteristic identification network and obtaining the path adjustment information according to the second identification result output by the brain-computer characteristic identification network.
On this basis, an optional brain-computer feature recognition network includes a generator connected to the brain-computer feature extraction network and a discriminator connected to the generator, and the path adjustment information obtaining unit may include:
the analog data obtaining subunit is used for inputting the brain-computer characteristics into the generator and obtaining analog data for simulating the brain-computer data according to the output result of the generator;
and the path adjustment information obtaining subunit is used for inputting the brain-computer data and the simulation data into the discriminator and obtaining the path adjustment information according to the second identification result output by the discriminator.
Alternatively, the brain-computer data may include at least one of an event-related synchronization potential, a desynchronization potential, a steady-state visual evoked potential, a slow cortical potential, a μ rhythm, and a β rhythm; and/or,
the brain-computer feature extraction network may comprise a spiking neural network or a Hodgkin-Huxley network.
The path planning apparatus provided by this embodiment of the invention responds to a path planning instruction through the initial path generating module, acquires path planning data, identifies the path planning data, and generates an initial path according to the obtained first identification result, where the path planning data include position data and/or voice data for path planning, the position data being obtained by capturing a target part; acquires, through the path adjustment information obtaining module, brain-computer data for adjusting the initial path, identifies the brain-computer data, and obtains path adjustment information according to the obtained second identification result; and adjusts the initial path through the target path obtaining module based on the path adjustment information to obtain a target path. The apparatus does not require a preset path: an initial path can be generated in real time from the acquired path planning data, and path adjustment information generated from the acquired brain-computer data then adjusts that initial path, thereby achieving flexible path planning and meeting the practical need to control a mobile robot flexibly.
The path planning device provided by the embodiment of the invention can execute the path planning method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that, in the embodiment of the path planning apparatus, each included unit and each included module are only divided according to functional logic, but are not limited to the above division, as long as the corresponding function can be implemented; in addition, the specific names of the functional units are only for the convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
FIG. 10 illustrates a schematic diagram of an electronic device 10 that may be used to implement embodiments of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 10, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from a storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The processor 11 performs the various methods and processes described above, such as a path planning method.
In some embodiments, the path planning method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the path planning method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the path planning method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system, thereby overcoming the defects of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be understood that various forms of the flows shown above, reordering, adding or deleting steps, may be used. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of path planning, comprising:
responding to a path planning instruction, acquiring path planning data, identifying the path planning data, and generating an initial path according to an acquired first identification result, wherein the path planning data comprises position data and/or voice data for path planning, and the position data is acquired by acquiring a target position;
acquiring brain-computer data for adjusting the initial path, identifying the brain-computer data, and obtaining path adjustment information according to an obtained second identification result;
and adjusting the initial path based on the path adjustment information to obtain a target path.
2. The method of claim 1, wherein when the path planning data includes the location data, the identifying the path planning data and generating an initial path according to the obtained first identification result comprises:
identifying the position data, and generating an initial path according to an obtained first identification result, wherein the initial path is located in a three-dimensional space where the target position is located;
after the generating an initial path according to the obtained first recognition result, the method further includes:
acquiring a target map which is associated with the three-dimensional space and is constructed based on instant positioning and a map, and matching the initial path to the target map;
and updating the initial path according to the obtained first matching result.
3. The method of claim 2, wherein when the position data comprises a portion video acquired from the target portion at at least two viewing angles, the identifying the position data and generating an initial path according to the obtained first identification result comprises:
for the part video acquired under each view angle, framing the part video to obtain a part image, and performing feature extraction on the part image to obtain image features;
matching the image characteristics in different position images, and reconstructing the three-dimensional space according to an obtained second matching result;
and acquiring pointing information of the target part from the three-dimensional space, acquiring an intersection point of the extension direction pointed by the target part and the ground according to the pointing information, and generating an initial path according to the intersection point.
4. The method of claim 2 or 3, wherein the target site comprises a hand.
5. The method according to claim 1, wherein identifying the brain-computer data and obtaining path adjustment information based on the obtained second identification result comprises:
acquiring a trained brain-computer data recognition model, wherein the brain-computer data recognition model comprises a brain-computer feature extraction network and a brain-computer feature recognition network;
inputting the brain-computer data into the brain-computer feature extraction network, and obtaining brain-computer features according to output results of the brain-computer feature extraction network;
and inputting the brain-computer characteristics into the brain-computer characteristic identification network, and obtaining path adjustment information according to a second identification result output by the brain-computer characteristic identification network.
6. The method of claim 5, wherein the brain-computer feature recognition network comprises a generator connected to the brain-computer feature extraction network, and a discriminator connected to the generator;
the inputting the brain-computer characteristics into the brain-computer characteristic recognition network, and obtaining path adjustment information according to a second recognition result output by the brain-computer characteristic recognition network includes:
inputting the brain-computer characteristics into the generator, and obtaining simulation data for simulating the brain-computer data according to an output result of the generator;
and inputting the brain-computer data and the simulation data into the discriminator, and obtaining path adjustment information according to a second recognition result output by the discriminator.
7. The method of claim 5 or 6, wherein the brain-computer data comprises at least one of an event-related synchronization potential, a desynchronization potential, a steady-state visual evoked potential, a slow cortical potential, a μ rhythm, and a β rhythm; and/or,
the brain-computer feature extraction network comprises a spiking neural network or a Hodgkin-Huxley network.
8. A path planning apparatus, comprising:
the initial path generation module is used for responding to a path planning instruction, acquiring path planning data, identifying the path planning data and generating an initial path according to an acquired first identification result, wherein the path planning data comprises position data and/or voice data used for path planning, and the position data is acquired by acquiring a target position;
a path adjustment information obtaining module, configured to obtain brain-computer data used for adjusting the initial path, identify the brain-computer data, and obtain path adjustment information according to an obtained second identification result;
and the target path obtaining module is used for adjusting the initial path based on the path adjusting information to obtain a target path.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the path planning method of any of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a processor to perform a path planning method according to any one of claims 1-7 when executed.
CN202211356890.0A 2022-11-01 2022-11-01 Path planning method and device, electronic equipment and storage medium Pending CN115617046A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211356890.0A CN115617046A (en) 2022-11-01 2022-11-01 Path planning method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115617046A true CN115617046A (en) 2023-01-17

Family

ID=84876927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211356890.0A Pending CN115617046A (en) 2022-11-01 2022-11-01 Path planning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115617046A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106125925A (en) * 2016-06-20 2016-11-16 华南理工大学 Intelligent grabbing method based on gesture and voice control
CN108733059A (en) * 2018-06-05 2018-11-02 湖南荣乐科技有限公司 A guide method and robot
CN109044651A (en) * 2018-06-09 2018-12-21 苏州大学 Intelligent wheelchair control method and system based on natural gesture instructions in unknown environments
CN109605385A (en) * 2018-11-28 2019-04-12 东南大学 A rehabilitation auxiliary robot driven by a hybrid brain-computer interface
CN109976390A (en) * 2016-11-21 2019-07-05 清华大学深圳研究生院 A space robot remote control system based on three-dimensional gestures
CN110398960A (en) * 2019-07-08 2019-11-01 浙江吉利汽车研究院有限公司 A path planning method, device and equipment for intelligent driving
CN111694428A (en) * 2020-05-25 2020-09-22 电子科技大学 Gesture and trajectory remote control robot system based on Kinect


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄志刚 (Huang Zhigang): "'Mind-reading' robots can actually be controlled with just brainwaves and gestures", Retrieved from the Internet <URL:"读心"机器人竟然用脑电波和手势就能控制了! - 知乎 (zhihu.com)> *

Similar Documents

Publication Publication Date Title
US20210019215A1 (en) System and Method for Error Detection and Correction in Virtual Reality and Augmented Reality Environments
CN108961369B (en) Method and device for generating 3D animation
CN112614213B (en) Facial expression determining method, expression parameter determining model, medium and equipment
CN111860167B (en) Face fusion model acquisition method, face fusion model acquisition device and storage medium
CN113420719B (en) Method and device for generating motion capture data, electronic equipment and storage medium
CN113240778B (en) Method, device, electronic equipment and storage medium for generating virtual image
CN113177968A (en) Target tracking method and device, electronic equipment and storage medium
CN110675475A (en) Face model generation method, device, equipment and storage medium
US20220261516A1 (en) Computer vision and speech algorithm design service
CN113362263A (en) Method, apparatus, medium, and program product for changing the image of a virtual idol
CN114187624A (en) Image generation method, image generation device, electronic equipment and storage medium
CN112507833A (en) Face recognition and model training method, device, equipment and storage medium
CN113902956A (en) Training method of fusion model, image fusion method, device, equipment and medium
CN112580666A (en) Image feature extraction method, training method, device, electronic equipment and medium
Cherkasov et al. The use of open and machine vision technologies for development of gesture recognition intelligent systems
CN111523467A (en) Face tracking method and device
CN114399424A (en) Model training method and related equipment
CN114187392A (en) Virtual even image generation method and device and electronic equipment
CN113592932A (en) Training method and device for deep completion network, electronic equipment and storage medium
CN116092120B (en) Image-based action determining method and device, electronic equipment and storage medium
CN113052962A (en) Model training method, information output method, device, equipment and storage medium
CN111832611A (en) Training method, device and equipment of animal recognition model and storage medium
CN115617046A (en) Path planning method and device, electronic equipment and storage medium
CN113781653B (en) Object model generation method and device, electronic equipment and storage medium
CN112200169B (en) Method, apparatus, device and storage medium for training a model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination