CN115576359B - Unmanned cluster behavior control method and device based on visual perception and electronic equipment - Google Patents

Unmanned cluster behavior control method and device based on visual perception and electronic equipment Download PDF

Info

Publication number
CN115576359B
CN115576359B CN202211569284.7A CN202211569284A CN115576359B CN 115576359 B CN115576359 B CN 115576359B CN 202211569284 A CN202211569284 A CN 202211569284A CN 115576359 B CN115576359 B CN 115576359B
Authority
CN
China
Prior art keywords
current individual
individual
view
neighbor
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211569284.7A
Other languages
Chinese (zh)
Other versions
CN115576359A (en
Inventor
肖延东
齐景涛
白亮
魏迎梅
张华喜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202211569284.7A priority Critical patent/CN115576359B/en
Publication of CN115576359A publication Critical patent/CN115576359A/en
Application granted granted Critical
Publication of CN115576359B publication Critical patent/CN115576359B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/104Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of image vision, in particular to a visual perception-based unmanned cluster behavior control method and device and electronic equipment. The method comprises the steps of obtaining visual perception information in a current individual vision field range, and processing the visual perception information to obtain first-order visual information and second-order visual information; and taking the first-order visual information and the second-order visual information as input, designing a speed decision equation by considering a self-driving item, a calibration item, a repulsion item and an attraction item of the current individual, and controlling the current individual according to the equation. In the group constructed by the method, the individual visually observes external information, the order of the group is realized through a calibration item input by second-order visual information, and the collision avoidance and aggregation of the group are realized through a repulsion item and an attraction item input by first-order visual information. The unmanned cluster constructed by the method does not need central control, and the individual makes a decision only by the visual information, so that the unmanned cluster can cope with the rejection environment of communication interference which cannot be coped with by the existing cluster method.

Description

Unmanned cluster behavior control method and device based on visual perception and electronic equipment
Technical Field
The present application relates to the field of image vision technologies, and in particular, to an unmanned cluster behavior control device based on visual perception, and an electronic apparatus.
Background
In recent years, unmanned cluster systems are widely applied to military and civil fields due to the characteristics of cost advantage, robustness self-healing, capability multiplication and the like. The existing group behavior generation method omits the information acquisition process of individuals, takes position and speed information based on communication as input to design interaction modes among the individuals, and the interaction modes all comprise three types of calibration response for realizing consistent group motion directions, rejection response for realizing collision avoidance among the individuals and attraction response for realizing group aggregation from the phenomenology of a biological group level. In the control algorithms, data transmission among individuals in the unmanned cluster depends on a network, and the data volume of data transmission and the network reliability can influence the real-time performance and stability of behavior control of the unmanned cluster.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an unmanned cluster behavior control based on visual perception, an apparatus and an electronic device. The method starts from the sensory neuroscience of the biological individual and adopts a visual perception mode to model the information acquisition of the biological individual, thereby replacing the original information acquisition method depending on communication. The unmanned cluster constructed by the method does not need central control, and the individual makes a decision only by means of visual information, so that the unmanned cluster can cope with the rejection environment of communication interference which cannot be coped with by the existing cluster method.
A method of unmanned cluster behavior control based on visual perception, the method comprising:
acquiring visual image information of other individuals in the 360-degree vision range of the current individual in the unmanned cluster under an individual reference coordinate system; the individual reference coordinate system is constructed by taking the speed direction of the current individual as the reference direction of the current individual and taking the anticlockwise direction as the positive direction of the relative direction.
And distinguishing different individuals according to the shielding relation among the individuals in the visual image information, and determining the visual perception function of the current individual.
Obtaining first-order visual information of the visual neighbors according to the visual perception function; the first order visual information includes: view occlusion angle and relative orientation.
And according to the relative orientation sequence of the visual neighbors in the visual perception function, taking a visual neighbor set with a locally maximum view shielding angle in the current individual view as a salient neighbor set.
And differentiating the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information of each salient neighbor, wherein the second-order visual information comprises relative orientation change and view shielding angle change.
And determining the self-driving item of the current individual according to the speed of the current individual and the expected speed.
And determining the calibration item of the current individual according to the relative orientation, the relative orientation change and the view shielding angle change of each salient neighbor of the current individual.
And determining the repulsive item and the attractive item of the current individual according to the relative position and the view shielding angle of each salient neighbor of the current individual.
And determining a speed decision equation of the current individual according to the self-driving term, the calibration term, the repulsion term and the attraction term.
And determining the speed information of the current individual at the current moment according to the speed decision equation of the current individual.
And updating the motion state of the current individual according to the speed information at the current moment to obtain the position information of the current individual at the next moment.
An unmanned cluster behavior control device based on visual perception, the device comprising:
the visual image acquisition module is used for acquiring visual image information of other individuals in the 360-degree vision field range of the current individual in the unmanned cluster under the individual reference coordinate system; the individual reference coordinate system is constructed by taking the speed direction of the current individual as the reference direction of the current individual and taking the counterclockwise direction as the positive direction of the relative direction.
The first-order visual information determining module is used for distinguishing different individuals according to the shielding relation among the individuals in the visual image information and determining the visual perception function of the current individual; obtaining first-order visual information of the visual neighbors according to the visual perception function; the first order visual information includes: view obscuration angles and relative orientations.
The second-order visual information determining module is used for taking a visual neighbor set with a locally maximum visual field shielding angle in the current individual visual field as a salient neighbor set according to the relative azimuth sequence of the visual neighbors in the visual perception function; and differentiating the first-order visual information of each salient neighbor of the current individual to obtain the second-order visual information of each salient neighbor, wherein the second-order visual information comprises relative azimuth change and view shielding angle change.
The decision module is used for determining the self-driven item of the current individual according to the speed and the expected speed of the current individual; determining a calibration item of the current individual according to the relative position, the relative position change and the view shielding angle change of each highlighted neighbor of the current individual; determining a repulsion item and an attraction item of the current individual according to the relative position and the view shielding angle of each salient neighbor of the current individual; determining a speed decision equation of the current individual according to the self-driving item, the calibration item, the repulsion item and the attraction item; and determining the speed information of the current individual at the current moment according to the speed decision equation of the current individual.
And the driving module is used for updating the motion state of the current individual according to the speed information at the current moment to obtain the position information of the current individual at the next moment.
An electronic device comprising a memory storing a computer program and a processor implementing any of the above methods when the processor executes the computer program.
The unmanned cluster behavior control method and device based on visual perception and the electronic equipment comprise the following steps: the method comprises the steps that a current individual in an unmanned cluster acquires visual perception information in a visual field range, and the visual perception information is processed to obtain first-order visual information and second-order visual information; the method comprises the steps of taking first-order visual information and second-order visual information as input, designing a speed decision equation of a current individual by considering a self-driving item, a calibration item, a repulsion item and an attraction item of the individual, and controlling the current individual according to the decision equation. In the group constructed by the model design, the individual visually observes external information, the order of the group is realized through a calibration item input by second-order visual information, and the collision avoidance and the aggregation of the group are realized through a repulsion item and an attraction item input by first-order visual information. The unmanned cluster constructed by the method does not need central control, and the individual makes a decision only by means of visual information, so that the unmanned cluster can cope with the rejection environment of communication interference which cannot be coped with by the existing cluster method.
Drawings
FIG. 1 is a schematic flow chart of a method for controlling behavior of an unmanned cluster based on visual perception in one embodiment;
FIG. 2 is a schematic diagram of another embodiment in 2-dimensional space
Figure 712564DEST_PATH_IMAGE001
A schematic of a moving circular individual;
FIG. 3 is a schematic diagram of an individual reference coordinate system in another embodiment;
FIG. 4 is a visual perception function in another embodiment
Figure 576615DEST_PATH_IMAGE002
An acquired schematic diagram;
FIG. 5 is a diagram of visual perception functions in another embodiment
Figure 594249DEST_PATH_IMAGE003
A schematic diagram of (a);
FIG. 6 is a view occlusion angle sequence in another embodiment
Figure 151132DEST_PATH_IMAGE004
A schematic diagram of (a);
FIG. 7 is another embodiment of a neighbor
Figure 156783DEST_PATH_IMAGE005
Relative velocity parallel component of
Figure 824524DEST_PATH_IMAGE006
A schematic representation of the generated second order visual information;
FIG. 8 shows neighbors of different relative positions in another embodiment
Figure 696665DEST_PATH_IMAGE005
Relative velocity parallel component of
Figure 424450DEST_PATH_IMAGE007
The resulting relative orientation change;
FIG. 9 is a diagram of different relative position neighbors in another embodiment
Figure 162468DEST_PATH_IMAGE005
Relative velocity parallel component of
Figure 368321DEST_PATH_IMAGE008
The resulting view occlusion angle changes;
FIG. 10 shows neighbors of different relative orientations in another embodiment
Figure 360548DEST_PATH_IMAGE005
Relative velocity parallel component of
Figure 524813DEST_PATH_IMAGE009
The generated second-order visual information;
FIG. 11 is another embodiment of a neighbor
Figure 969701DEST_PATH_IMAGE005
Relative velocity vertical component of
Figure 494092DEST_PATH_IMAGE010
A schematic of the generated second order visual information;
FIG. 12 shows neighbors of different relative positions in another embodiment
Figure 606404DEST_PATH_IMAGE005
Relative velocity vertical component of
Figure 410412DEST_PATH_IMAGE011
The resulting relative orientation change;
FIG. 13 shows neighbors of different relative positions in another embodiment
Figure 608176DEST_PATH_IMAGE005
Relative velocity vertical component of
Figure 421411DEST_PATH_IMAGE012
The resulting view occlusion angle changes;
FIG. 14 shows neighbors of different relative orientations in another embodiment
Figure 637497DEST_PATH_IMAGE005
Relative velocity vertical component of
Figure 877986DEST_PATH_IMAGE013
The generated second-order visual information;
FIG. 15 is a schematic diagram illustrating view blocking angles at different distances in another embodiment;
FIG. 16 is a graph of different distance rejection term factors in another embodiment;
FIG. 17 is a graph of different distance attraction term factors in another embodiment;
FIG. 18 is a schematic illustration of the effect of the repulsive and attractive terms on the directional change in velocity of an individual in another embodiment;
FIG. 19 is a schematic illustration of the effect of the repulsive and attractive terms on the magnitude of the change in velocity of an individual in another embodiment;
FIG. 20 is a block diagram of an embodiment of an unmanned cluster behavior control device based on visual perception;
FIG. 21 is a diagram illustrating the internal architecture of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided a visual perception-based unmanned cluster behavior control method, comprising the steps of:
step 100: and acquiring visual image information of other individuals in the 360-degree visual field range of the current individual in the unmanned cluster under the individual reference coordinate system.
The individual reference coordinate system is the reference direction of the current individual with the speed direction of the current individual (
Figure 563045DEST_PATH_IMAGE014
) And constructing by taking the counterclockwise direction as the positive direction of the relative direction.
Specifically, the current individual uses the camera to acquire visual image information of other individuals within a 360-degree visual field range, and the visual image information is real-time visual field information intercepted by the camera.
The unmanned cluster may be a group composed of a plurality of unmanned aerial vehicles, or a group composed of a plurality of task robots, which is not specifically limited herein, and taking a task robot group as an example, the task robots include a current task robot and other task robots within a field of view of the current task robot; and the current task robot performs behavior control according to the obtained visual image information of other individuals in the 360-degree visual field range under the individual reference coordinate system. The task robot can be provided with a plurality of cameras for acquiring environment image information, and the cameras can be turned in each degree of freedom, so that the surrounding environment information can be acquired, and in addition, the cameras can be in a plurality and are respectively responsible for determining the visual information in the visual field range, and the visual image information in the visual field range of 360 degrees is acquired through splicing, so that various modes for acquiring the visual image information of other individuals in the visual field range exist, and the embodiment does not specifically limit the step of acquiring the visual image information.
This embodiment considers the location in 2-dimensional space
Figure 179971DEST_PATH_IMAGE015
Diameter of
Figure 1297DEST_PATH_IMAGE016
Is/are as follows
Figure 664884DEST_PATH_IMAGE017
A moving circular individual, a current individual velocity vector
Figure 837239DEST_PATH_IMAGE018
Can be decomposed into velocity magnitudes
Figure 726698DEST_PATH_IMAGE019
And direction of speed
Figure 668109DEST_PATH_IMAGE020
As shown in fig. 2.
In order to simulate the acquisition mode of the biological individual to the external information, the speed direction is constructed
Figure 250400DEST_PATH_IMAGE021
Is an individual
Figure 893740DEST_PATH_IMAGE023
Reference direction (a)
Figure 852468DEST_PATH_IMAGE024
) Positive counter-clockwise direction (
Figure 648386DEST_PATH_IMAGE025
) The individual reference frame is shown in fig. 3, d FB Indicating that the front-to-back distance is forward, d LR Indicating that the left-right distance is positive to the right. Under this coordinate system, define:
Figure 401579DEST_PATH_IMAGE026
wherein,
Figure 548526DEST_PATH_IMAGE027
and
Figure 29055DEST_PATH_IMAGE028
are respectively parallel and perpendicular to
Figure 679479DEST_PATH_IMAGE029
The unit vector of (2).
Step 102: and distinguishing different individuals according to the shielding relation among individuals in the visual image information, and determining the visual perception function of the current individual.
Specifically, as shown in fig. 4, under the individual reference coordinate system, the current individual isiAccording to the counterclockwise direction as the positive direction (
Figure 869152DEST_PATH_IMAGE030
) Obtaining the surrounding area centered around itself by adopting a visual perception mode
Figure 237816DEST_PATH_IMAGE031
Visual perception function representing occlusion or non-occlusion within range
Figure 522036DEST_PATH_IMAGE032
. As shown in fig. 5, individuals employ a higher order view of the living being, in which the individuals differentiate between individuals taking into account occlusion between the individuals.
Step 104: obtaining first-order visual information of the visual neighbors according to the visual perception function; the first order visual information includes: view occlusion angle and relative orientation.
Step 106: and according to the relative orientation sequence of the visual neighbors in the visual perception function, taking the visual neighbor set with the local maximum view shielding angle in the current individual view as a salient neighbor set.
In particular, the current individual is inspired by the attention mechanism of the biological individualiOnly focus on view blocking angle in visible neighborhood
Figure 292546DEST_PATH_IMAGE033
The local largest neighbor. According to
Figure 387541DEST_PATH_IMAGE034
The current individual of the visual perception function obtained in the process of time observation
Figure 243501DEST_PATH_IMAGE036
Obtaining a sequence of view occlusion angles in order of relative orientation of individuals in the view
Figure 82144DEST_PATH_IMAGE037
And identity sequence
Figure 972740DEST_PATH_IMAGE038
(as shown in fig. 6).
Step 108: and differentiating the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information of each salient neighbor, wherein the second-order visual information comprises relative orientation change and view shielding angle change.
Specifically, the second-order visual information is the change of the view shielding angle and the change of the relative orientation, and reflects the relative position change between individuals.
Step 110: and determining the self-driving item of the current individual according to the speed of the current individual and the expected speed.
In particular, the current individual's self-driven term is used to size the current individual at a desired speed
Figure 219395DEST_PATH_IMAGE039
And (4) moving.
Step 112: and determining the calibration item of the current individual according to the relative position, the relative position change and the view shielding angle change of each salient neighbor of the current individual.
Specifically, the calibration terms are designed by eliminating second-order visual information, and the ordering of the population is realized through the calibration terms input by the second-order visual information.
The elimination of second-order visual information means that the relative position between individuals remains unchanged, i.e. agreement in the direction of motion is achieved between individuals.
Step 114: and determining the repulsive item and the attractive item of the current individual according to the relative position and the view shielding angle of each salient neighbor of the current individual.
Specifically, the collision avoidance and the aggregation of individuals in the unmanned aerial vehicle cluster are realized according to the repulsion item and the attraction item which are designed according to the first-order visual information.
Step 116: and determining a speed decision equation of the current individual according to the self-driving term, the calibration term, the repulsion term and the attraction term.
In particular, with second order visual information (view occlusion angle variation)
Figure 297072DEST_PATH_IMAGE040
And relative orientation change
Figure 204986DEST_PATH_IMAGE041
) And first order visual information (view occlusion angle)
Figure 684509DEST_PATH_IMAGE042
And relative orientation
Figure 636153DEST_PATH_IMAGE043
) As input, consider individual self-driven items
Figure 201126DEST_PATH_IMAGE044
Calibration term
Figure 115993DEST_PATH_IMAGE045
The term of exclusion
Figure 981181DEST_PATH_IMAGE046
And attraction items
Figure 588879DEST_PATH_IMAGE047
Four response modes, a speed decision equation of the current individual is defined as:
Figure 155996DEST_PATH_IMAGE048
step 118: and determining the speed information of the current individual at the current moment according to the speed decision equation of the current individual.
Specifically, in the decision making process, the current individual makes a decision on the speed and the speed direction according to the extracted visual information.
Step 120: and updating the motion state of the current individual according to the speed information at the current moment to obtain the position information of the current individual at the next moment.
Specifically, according to the speed information of the current individual at the current moment, the motion equation of the current individual is obtained as follows:
Figure 874553DEST_PATH_IMAGE049
(2)
wherein,
Figure 594248DEST_PATH_IMAGE050
is the current individual
Figure 372848DEST_PATH_IMAGE051
Is changed in the position of the movable body,
Figure 177993DEST_PATH_IMAGE052
is the current individual
Figure 949508DEST_PATH_IMAGE053
At the current momenttThe speed of (d);
and updating the motion state of the current individual according to the motion equation of the current individual to obtain the position information of the current individual at the next moment.
In the above method for controlling behavior of unmanned clusters based on visual perception, the method includes: the method comprises the steps that a current individual in an unmanned cluster acquires visual perception information in a visual field range, and the visual perception information is processed to obtain first-order visual information and second-order visual information; the method comprises the steps of taking first-order visual information and second-order visual information as input, designing a speed decision equation of a current individual by considering a self-driving item, a calibration item, a repulsion item and an attraction item of the individual, and controlling the current individual according to the decision equation. In the group constructed by the model design, the individual visually observes external information, the order of the group is realized by the calibration item input by the second-order visual information, and the collision avoidance and aggregation of the group are realized by the repulsion item and the attraction item input by the first-order visual information. The unmanned cluster constructed by the method does not need central control, and the individual makes a decision only by means of visual information, so that the unmanned cluster can cope with the rejection environment of communication interference which cannot be coped with by the existing cluster method.
In one embodiment, step 102 comprises: and distinguishing different individuals according to the shielding relation in the visual image information, setting the shielding position to be 1 and the non-shielding position to be 0, and obtaining the visual perception function of the current individual.
In one embodiment, step 106 includes: according to the visual perception function, the current individual obtains a view shielding angle sequence and an identity sequence according to the sequence of the relative positions of other individuals in the view. Specifically, the relative orientation order in the visual field is
Figure 258130DEST_PATH_IMAGE054
Of (2)
Figure 473211DEST_PATH_IMAGE055
Its view shielding angle
Figure 234493DEST_PATH_IMAGE056
And identity sequence
Figure 826012DEST_PATH_IMAGE057
. For the continuous azimuth interval which is not shielded in the visual field, the visual field is shielded by the angle
Figure 241337DEST_PATH_IMAGE058
And identity sequence
Figure 361740DEST_PATH_IMAGE059
. Defining individuals by the above two sequences
Figure 141477DEST_PATH_IMAGE060
View shielding angle in view
Figure 5528DEST_PATH_IMAGE061
Local maximum neighbor set, i.e. salient neighbor set
Figure 288742DEST_PATH_IMAGE062
In one embodiment, the second order visual information includes view occlusion angle changes and relative orientation changes; step 108 comprises: for is to
Figure 94892DEST_PATH_IMAGE063
Time of day and
Figure 830767DEST_PATH_IMAGE064
differentiating the first-order visual information of the common salient neighbors of the current individual at the moment to obtain second-order visual information:
Figure 232930DEST_PATH_IMAGE065
(3)
wherein,
Figure 636229DEST_PATH_IMAGE066
is a neighborjIn the current individualiThe angle of occlusion in the field of view varies,
Figure 364014DEST_PATH_IMAGE067
is a neighborjIn the current individualiThe relative orientation in the field of view varies,
Figure 836452DEST_PATH_IMAGE068
is a neighborjIn the current individualiThe angle of the occlusion in the field of view,
Figure 307885DEST_PATH_IMAGE069
is a neighborjIn the current individualiThe relative orientation in the field of view,
Figure 565691DEST_PATH_IMAGE070
in order to be a differential time interval,tis time.
In particular, the current individualiBy pairs
Figure 198798DEST_PATH_IMAGE071
Time of day and
Figure 909265DEST_PATH_IMAGE072
all being prominent neighbors
Figure 433656DEST_PATH_IMAGE073
One neighbor
Figure 545968DEST_PATH_IMAGE074
The difference of the front and back moments of the first-order visual information obtains the second-order visual information (the change of the view shielding angle)
Figure 615555DEST_PATH_IMAGE075
And relative orientation change
Figure 813319DEST_PATH_IMAGE076
)。
In one embodiment, step 110 comprises: according to the speed of the current individual and the expected speed, determining that the self-driving item of the current individual is as follows:
Figure 626554DEST_PATH_IMAGE077
(4)
wherein,
Figure 858952DEST_PATH_IMAGE078
in order to be a self-driven item,
Figure 80199DEST_PATH_IMAGE079
in order to be a self-driving constant,
Figure 765258DEST_PATH_IMAGE080
in order to be of the desired magnitude of the speed,
Figure 382185DEST_PATH_IMAGE081
is the current individual
Figure 203510DEST_PATH_IMAGE082
At the current time
Figure 598588DEST_PATH_IMAGE083
The velocity of (2).
Self-driving of linear function representation
Figure 770943DEST_PATH_IMAGE084
Make the individual in the desired speed size
Figure 925981DEST_PATH_IMAGE085
And (4) moving.
In one embodiment, step 112 includes: determining the calibration item of the current individual as follows according to the relative orientation, the relative orientation change and the view shielding angle change of each salient neighbor of the current individual:
Figure 867392DEST_PATH_IMAGE086
(5)
wherein,
Figure 449684DEST_PATH_IMAGE087
in order to be a calibration term for the calibration,
Figure 109335DEST_PATH_IMAGE088
is a neighborjIn the current individualiThe relative orientation in the field of view,
Figure 51752DEST_PATH_IMAGE089
is a neighborjIn the current individualiThe angle of occlusion in the field of view varies,
Figure 113249DEST_PATH_IMAGE090
is a neighborjIn the current individualiThe relative orientation in the field of view varies,
Figure 866441DEST_PATH_IMAGE091
is composed of
Figure 747810DEST_PATH_IMAGE092
Time of day and
Figure 244650DEST_PATH_IMAGE093
all times are current individualsiThe total number of neighbors is highlighted,
Figure 144342DEST_PATH_IMAGE094
Figure 334015DEST_PATH_IMAGE095
Figure 702679DEST_PATH_IMAGE096
and
Figure 3211DEST_PATH_IMAGE097
is a constant.
In particular, the current individual
Figure 508141DEST_PATH_IMAGE098
With second-order visual information (
Figure 120913DEST_PATH_IMAGE099
And
Figure 711294DEST_PATH_IMAGE100
) Calibration terms as inputs
Figure 549937DEST_PATH_IMAGE101
And (4) taking the analysis result of the crowd data as a heuristic to design. With the current individual
Figure 440533DEST_PATH_IMAGE053
As an observer, his visible neighbors
Figure 706429DEST_PATH_IMAGE102
Relative velocity of
Figure 298953DEST_PATH_IMAGE103
Can be decomposed into parallel to the speed
Figure 675708DEST_PATH_IMAGE104
Component (b) of
Figure 155231DEST_PATH_IMAGE105
And perpendicular to the velocity
Figure 857607DEST_PATH_IMAGE106
Component (b) of
Figure 688160DEST_PATH_IMAGE107
(as shown in fig. 3). Current individual
Figure 117873DEST_PATH_IMAGE108
If and neighbors
Figure 983061DEST_PATH_IMAGE109
The consensus on the magnitude of the velocities is achieved based on the parallel component
Figure 590760DEST_PATH_IMAGE110
The generated second-order visual information is subjected to speed variation according to the parallel component
Figure 643030DEST_PATH_IMAGE111
The generated second order visual information makes a velocity direction change. Neighbors of different relative positions
Figure 627166DEST_PATH_IMAGE112
Parallel component of
Figure 330549DEST_PATH_IMAGE113
The generated second-order visual information is shown in fig. 7-10, wherein fig. 7 is a neighbor
Figure 374728DEST_PATH_IMAGE114
Relative velocity parallel component of
Figure 648715DEST_PATH_IMAGE115
The resulting second order visual information, FIG. 8, is the different relative position neighbors
Figure 170963DEST_PATH_IMAGE116
Relative velocity parallel component of
Figure 745164DEST_PATH_IMAGE117
The resulting relative orientation change, FIG. 9 is the neighbor of different relative positions
Figure 960245DEST_PATH_IMAGE116
Relative velocity parallel component of
Figure 233444DEST_PATH_IMAGE118
The resulting view occlusion angle changes, FIG. 10 for neighbors of different relative orientations
Figure 293804DEST_PATH_IMAGE116
Relative velocity parallel component of
Figure 988091DEST_PATH_IMAGE119
The generated second order visual information.
Figure 108493DEST_PATH_IMAGE120
Can be based on relative position
Figure 622651DEST_PATH_IMAGE121
Decomposed into tangential and normal components, which respectively yield the current individual
Figure 735970DEST_PATH_IMAGE060
See neighbor
Figure 284763DEST_PATH_IMAGE116
Relative orientation changes and view occlusion changes. When the individual is
Figure 576067DEST_PATH_IMAGE116
Relative to the current individual
Figure 577521DEST_PATH_IMAGE060
In (b) direction of
Figure 245263DEST_PATH_IMAGE122
Is shown on the left side of
Figure 632250DEST_PATH_IMAGE123
) Or to the right (
Figure 360035DEST_PATH_IMAGE124
) The tangential component is the largest and the corresponding resulting relative orientation change is the largest, thus the current individual is subject to a change in velocity magnitude
Figure 583206DEST_PATH_IMAGE060
To be provided with
Figure 54639DEST_PATH_IMAGE125
Weighting the relative orientation changes; when in a relative orientation
Figure 296133DEST_PATH_IMAGE126
Is a front side (
Figure 194819DEST_PATH_IMAGE127
) Or a rear side (
Figure 905286DEST_PATH_IMAGE128
) The normal component is the largest and the corresponding resulting change in view occlusion is the largest, so the current individual is presented with a change in velocity magnitude
Figure 180409DEST_PATH_IMAGE060
To be provided with
Figure 27143DEST_PATH_IMAGE129
The view occlusion changes are weighted. In addition, to
Figure 348927DEST_PATH_IMAGE130
And with
Figure 546690DEST_PATH_IMAGE131
The same direction is taken as an example. When the neighbor is on the right side
Figure 359926DEST_PATH_IMAGE132
When the temperature of the water is higher than the set temperature,
Figure 61165DEST_PATH_IMAGE133
(ii) a And when the neighbor is on the left
Figure 567233DEST_PATH_IMAGE134
Time of flight
Figure 235981DEST_PATH_IMAGE135
(ii) a When the neighbor is located at the front side
Figure 852907DEST_PATH_IMAGE136
When the temperature of the water is higher than the set temperature,
Figure 674232DEST_PATH_IMAGE137
when the neighbor is located at the rear side
Figure 85622DEST_PATH_IMAGE138
Time of flight
Figure 257977DEST_PATH_IMAGE139
Current individual
Figure 662283DEST_PATH_IMAGE060
Should be accelerated to eliminate
Figure 603694DEST_PATH_IMAGE140
And
Figure 185985DEST_PATH_IMAGE141
therefore, the corresponding coefficients are all designed to be less than 0. Thus, the current individual
Figure 845636DEST_PATH_IMAGE060
To be provided with
Figure 538786DEST_PATH_IMAGE142
Second order visual information generated in response to parallel components, thereby relating to neighbors
Figure 600283DEST_PATH_IMAGE116
A consensus on the speed magnitude is achieved. Like the parallel component, neighbors of different relative orientations
Figure 337164DEST_PATH_IMAGE116
Perpendicular component of
Figure 218532DEST_PATH_IMAGE143
The generated second-order visual information is shown in FIGS. 11-14, in which FIG. 11 is a neighbor
Figure 980952DEST_PATH_IMAGE144
Relative velocity vertical component of
Figure 631376DEST_PATH_IMAGE145
Generated second order visual information, FIG. 12 is the different relative position neighbors
Figure 821049DEST_PATH_IMAGE146
Relative velocity vertical component of
Figure 193909DEST_PATH_IMAGE147
The resulting relative orientation change, FIG. 13 is the different relative position neighbors
Figure 228861DEST_PATH_IMAGE116
Relative velocity vertical component of
Figure 264950DEST_PATH_IMAGE148
Resulting in a change in the view occlusion angle, FIG. 14 is the neighborhood of different relative orientations
Figure 359945DEST_PATH_IMAGE116
Relative velocity vertical component of
Figure 215906DEST_PATH_IMAGE149
The resulting second order visual information. When in relative orientation
Figure 303817DEST_PATH_IMAGE150
Is a front side (
Figure 194412DEST_PATH_IMAGE151
) Or a rear side (
Figure 460308DEST_PATH_IMAGE152
) The tangential component is the largest and the corresponding resulting relative orientation change is the largest, so the current individual is changing in the direction of velocity
Figure 537986DEST_PATH_IMAGE060
To be provided with
Figure 180320DEST_PATH_IMAGE153
Weighting the relative orientation changes; when it is in a relative orientation
Figure 909110DEST_PATH_IMAGE154
Is shown on the left side of
Figure 611487DEST_PATH_IMAGE155
) Or to the right (
Figure 176461DEST_PATH_IMAGE156
) The normal component is the largest and the corresponding resulting change in view occlusion is the largest, so the current individual changes in the velocity direction
Figure 356906DEST_PATH_IMAGE060
To be provided with
Figure 940203DEST_PATH_IMAGE157
The view occlusion changes are weighted. In addition, to
Figure 813481DEST_PATH_IMAGE158
To the left as an example. When the neighbor is located on the right side
Figure 865751DEST_PATH_IMAGE159
When the utility model is used, the water is discharged,
Figure 115467DEST_PATH_IMAGE160
(ii) a When the neighbor is on the left
Figure 569582DEST_PATH_IMAGE161
Time-piece
Figure 348182DEST_PATH_IMAGE162
(ii) a When the neighbor is located at the front side
Figure 139945DEST_PATH_IMAGE163
When the temperature of the water is higher than the set temperature,
Figure 927772DEST_PATH_IMAGE164
when the neighbor is located at the rear side
Figure 236394DEST_PATH_IMAGE165
Time-piece
Figure 451475DEST_PATH_IMAGE166
Current individual
Figure 478336DEST_PATH_IMAGE060
Should be turned counterclockwise to eliminate
Figure 53543DEST_PATH_IMAGE167
And
Figure 216671DEST_PATH_IMAGE168
thus design according to
Figure 602653DEST_PATH_IMAGE169
Coefficient of term correspondence is less than 0, design
Figure 116811DEST_PATH_IMAGE170
The term corresponds to a coefficient greater than 0. Thus, the current individual
Figure 980862DEST_PATH_IMAGE060
To be provided with
Figure 778922DEST_PATH_IMAGE171
Second order visual information generated in response to parallel components, thereby relating to neighbors
Figure 70227DEST_PATH_IMAGE116
Consensus in the speed direction is achieved.
In one embodiment, step 114 comprises: determining the exclusion term of the current individual as follows according to the relative orientation and the view shielding angle of each salient neighbor of the current individual:
Figure 71681DEST_PATH_IMAGE172
(6)
wherein,
Figure 473843DEST_PATH_IMAGE173
in the case of the exclusive item or items,
Figure 877143DEST_PATH_IMAGE174
is a neighborjIn the current individualiThe relative orientation in the field of view,
Figure 588616DEST_PATH_IMAGE175
is a neighborjIn the current individualiThe angle of the occlusion in the field of view,
Figure 811786DEST_PATH_IMAGE176
Figure 283219DEST_PATH_IMAGE177
is a constant number of times, and is,
Figure 275446DEST_PATH_IMAGE178
is composed of
Figure 439711DEST_PATH_IMAGE179
Individual at the current momentiThe total number of neighbors is highlighted.
According to the relative position and the view shielding angle of each salient neighbor of the current individual, determining that the attraction item of the current individual is as follows:
Figure 396516DEST_PATH_IMAGE180
(7)
wherein:
Figure 140481DEST_PATH_IMAGE181
in order to attract the items,
Figure 252793DEST_PATH_IMAGE182
is a neighborjIn the current individualiThe relative orientation in the field of view,
Figure 322381DEST_PATH_IMAGE183
is a neighborjIn the current individualiThe angle of the occlusion in the field of view,
Figure 769411DEST_PATH_IMAGE184
Figure 582647DEST_PATH_IMAGE185
is a constant number of times, and is,
Figure 549466DEST_PATH_IMAGE186
is composed of
Figure 55533DEST_PATH_IMAGE179
Individual at presentiThe total number of neighbors is highlighted.
In particular, exclusive items
Figure 475013DEST_PATH_IMAGE187
And attraction items
Figure 75628DEST_PATH_IMAGE188
With first-order visual information (
Figure 162533DEST_PATH_IMAGE189
And
Figure 573922DEST_PATH_IMAGE190
) As an input. By relative orientation vector
Figure 746278DEST_PATH_IMAGE191
In that
Figure 635736DEST_PATH_IMAGE192
And
Figure 842727DEST_PATH_IMAGE193
the projections in the direction respectively form
Figure 674285DEST_PATH_IMAGE194
And
Figure 68358DEST_PATH_IMAGE195
by a factor of (a). In this way, projection is performed
Figure 27086DEST_PATH_IMAGE196
And
Figure 557425DEST_PATH_IMAGE197
respectively characterizing pairs of relative orientations
Figure 576196DEST_PATH_IMAGE198
And
Figure 709762DEST_PATH_IMAGE199
the influence of (c). The closer the distance between individuals, the wider the angle of the shielded region of the field of view, the
Figure 472182DEST_PATH_IMAGE200
The greater the demand (as shown in fig. 15 and 16). Thus, will
Figure 122606DEST_PATH_IMAGE201
Is arranged as
Figure 46699DEST_PATH_IMAGE202
One of which. On the contrary, by mixing
Figure 415364DEST_PATH_IMAGE203
Is arranged as
Figure 699584DEST_PATH_IMAGE204
To achieve a smaller angle of the occlusion region of the field of view the farther the distance between the individuals is, the
Figure 470093DEST_PATH_IMAGE205
The greater the demand (see fig. 15 and 17)Shown). Thus, factor
Figure 299509DEST_PATH_IMAGE206
And
Figure 421049DEST_PATH_IMAGE207
for characterizing pairs of relative distances
Figure 508960DEST_PATH_IMAGE208
And
Figure 868397DEST_PATH_IMAGE209
the influence of (c). For greater clarity, neighbors of different relative positions
Figure 134293DEST_PATH_IMAGE116
To pair
Figure 211970DEST_PATH_IMAGE210
The effect of (c) is shown in fig. 18-19. Neighbors in front (or behind)
Figure 119884DEST_PATH_IMAGE116
With the current individual
Figure 848674DEST_PATH_IMAGE060
When the distance is small, it generates repulsive force to make it present to the individual
Figure 551051DEST_PATH_IMAGE060
Deceleration (or acceleration). Conversely, the neighbors located at the front (rear)
Figure 381604DEST_PATH_IMAGE116
With individuals
Figure 562049DEST_PATH_IMAGE060
At greater distances, it creates an attractive force to present it to the individual
Figure 896079DEST_PATH_IMAGE060
Acceleration (or deceleration). Likewise, on the left (or right) sideNeighbor(s)
Figure 15694DEST_PATH_IMAGE116
With the current individual
Figure 67964DEST_PATH_IMAGE060
When the distance is small, it generates repulsive force to make it present to the individual
Figure 786521DEST_PATH_IMAGE060
Turning clockwise (or counterclockwise). In contrast, neighbors located to the left (or right)
Figure 506216DEST_PATH_IMAGE116
With the current individual
Figure 284816DEST_PATH_IMAGE060
At greater distances, it creates an attractive force to present it to the individual
Figure 339228DEST_PATH_IMAGE060
Turning counterclockwise (or clockwise).
In one embodiment, in the field of mechanics, the instantaneous change in velocity of an object is determined by the tangential force, while the instantaneous change in direction of the object is determined by the normal force. The current individual speed decision equation includes: a current individual speed magnitude decision equation and a direction change decision equation; step 116 includes: projecting the self-driven item, the calibration item, the repulsion item and the attraction item to the direction parallel to the current individual speed direction and the direction vertical to the current individual speed direction to obtain a speed size decision equation and a direction change decision equation of the current individual; the current individual speed size decision equation and direction change decision equation are respectively as follows:
Figure 861477DEST_PATH_IMAGE211
(8)
Figure 170098DEST_PATH_IMAGE212
(9)
wherein,
Figure 385179DEST_PATH_IMAGE213
is the current individualiAt the current timetThe size of the speed of the motor is changed,
Figure 146462DEST_PATH_IMAGE214
is the current individualiAt the current timetIs detected by the speed of the motor, and,
Figure 987247DEST_PATH_IMAGE215
in order to be a self-driven item,
Figure 150376DEST_PATH_IMAGE216
in order to be a calibration term, the calibration term,
Figure 536358DEST_PATH_IMAGE217
in the interest of the term being exclusive,
Figure 50515DEST_PATH_IMAGE218
in order to attract the items,
Figure 914566DEST_PATH_IMAGE219
is a neighborjIn the current individualiThe angle of the occlusion in the field of view,
Figure 447048DEST_PATH_IMAGE220
is a neighborjIn the current individualiThe relative orientation in the field of view,
Figure 3931DEST_PATH_IMAGE221
is a neighborjIn the current individualiThe angle of occlusion in the field of view varies,
Figure 5385DEST_PATH_IMAGE222
is a neighborjIn the current individualiThe relative orientation in the field of view changes,
Figure 673127DEST_PATH_IMAGE223
and
Figure 545268DEST_PATH_IMAGE224
parallel and perpendicular to the current individual, respectively
Figure 525250DEST_PATH_IMAGE060
At the current time
Figure 748421DEST_PATH_IMAGE179
Speed of
Figure 219853DEST_PATH_IMAGE225
The unit vector of (2).
Substituting the expressions (4) to (7) into the expressions (8) and (9) to obtain a speed magnitude decision equation and a direction change decision equation of the current individual, wherein the speed magnitude decision equation and the direction change decision equation are respectively as follows:
Figure 212080DEST_PATH_IMAGE226
(10)
Figure 360033DEST_PATH_IMAGE227
(11)
wherein,
Figure 70500DEST_PATH_IMAGE228
is the current individual
Figure 80045DEST_PATH_IMAGE060
At the current time
Figure 192357DEST_PATH_IMAGE179
The magnitude of the velocity of (a) is,
Figure 261944DEST_PATH_IMAGE229
in order to be a self-driving constant,
Figure 708975DEST_PATH_IMAGE230
in order to be of the desired magnitude of the speed,
Figure 256631DEST_PATH_IMAGE231
is composed of
Figure 489029DEST_PATH_IMAGE179
Time of day and
Figure 463939DEST_PATH_IMAGE232
all times are current individualsiThe total number of neighbors is highlighted,
Figure 148998DEST_PATH_IMAGE233
is composed of
Figure 15192DEST_PATH_IMAGE179
Individual at presentiHighlighting the total number of neighbors.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or stages is not necessarily sequential, but may be performed alternately or alternately with other steps or at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 20, there is provided a visual perception-based unmanned cluster behavior control apparatus, including: the device comprises a visual image acquisition module, a first-order visual information determination module, a second-order visual information determination module, a decision module and a driving module, wherein:
the visual image acquisition module is used for acquiring visual image information of other individuals in the 360-degree view range of the current individual in the unmanned cluster under the individual reference coordinate system; the individual reference coordinate system is constructed by taking the speed direction of the current individual as the reference direction of the current individual and taking the anticlockwise direction as the positive direction of the relative direction.
The first-order visual information determining module is used for distinguishing different individuals according to the shielding relation among the individuals in the visual image information and determining the visual perception function of the current individual; obtaining first-order visual information of the visual neighbors according to the visual perception function; the first order visual information includes: view obscuration angles and relative orientations.
The second-order visual information determining module is used for taking a visual neighbor set with a local maximum view shielding angle in the current individual view as a highlight neighbor set according to the relative orientation sequence of the visual neighbors in the visual perception function; and differentiating the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information of each salient neighbor, wherein the second-order visual information comprises relative orientation change and view shielding angle change.
The decision module is used for determining the self-driven item of the current individual according to the speed and the expected speed of the current individual; determining a calibration item of the current individual according to the relative orientation, the relative orientation change and the view shielding angle change of each salient neighbor of the current individual; determining a repulsion item and an attraction item of the current individual according to the relative position and the view shielding angle of each prominent neighbor of the current individual; determining a speed decision equation of the current individual according to the self-driving term, the calibration term, the repulsion term and the attraction term; and determining the speed information of the current individual at the current moment according to the speed decision equation of the current individual.
And the driving module is used for updating the motion state of the current individual according to the speed information at the current moment to obtain the position information of the current individual at the next moment.
In one embodiment, the first-order visual information determining module is further configured to distinguish different individuals according to the occlusion relationship in the visual image information, set an occlusion position to be 1, and set an non-occlusion position to be 0, so as to obtain the visual perception function of the current individual.
In one embodiment, the second-order visual information determining module is further configured to obtain, according to the visual perception function, a view occlusion angle sequence and an identity sequence of the current individual according to the order of relative positions of other individuals in the view; wherein for the current individual viewThe relative orientation sequence in the field is
Figure 836517DEST_PATH_IMAGE234
Of (2)
Figure 513486DEST_PATH_IMAGE116
Of the sequence of view occlusion angles
Figure 420262DEST_PATH_IMAGE235
And element values of identity sequences
Figure 309721DEST_PATH_IMAGE236
(ii) a For continuous azimuth interval which is not occluded in the current individual view field, the values of elements of view occlusion angle sequence
Figure 763049DEST_PATH_IMAGE237
And element values of identity sequences
Figure 79761DEST_PATH_IMAGE238
(ii) a And taking the visible neighbor set with the local maximum view shielding angle in the current individual view as the highlighted neighbor set.
In one embodiment, the second order visual information includes a view occlusion angle change and a relative orientation change; second order visual information determination module, and is also used for comparing
Figure 739412DEST_PATH_IMAGE179
Time of day and
Figure 432562DEST_PATH_IMAGE239
the current individual at the moment highlights the first-order visual information of the neighbors together for difference, and the expression of the obtained second-order visual information is shown as a formula (3).
In one embodiment, the decision module is further configured to determine the self-driving term of the current individual according to the speed of the current individual and the desired speed, as shown in equation (4).
In one embodiment, the decision module is further configured to determine a calibration term of the current individual according to the relative orientation, the relative orientation change, and the view occlusion angle change of each salient neighbor of the current individual, as shown in equation (5).
In one embodiment, the decision module is further configured to determine, according to the relative orientation and the view occlusion angle of each salient neighbor of the current individual, an exclusion term of the current individual as shown in equation (6); and determining the attraction term of the current individual as shown in the formula (7) according to the relative position and the view shielding angle of each salient neighbor of the current individual.
In one embodiment, the current individual velocity decision equation comprises: a current individual speed magnitude decision equation and a direction change decision equation; the decision module is also used for projecting the self-driven item, the calibration item, the repulsion item and the attraction item to the direction parallel to the current individual speed direction and the direction vertical to the current individual speed direction to obtain a speed size decision equation and a direction change decision equation of the current individual; the speed size decision equation and the direction change decision equation of the current individual are respectively shown as a formula (10) and a formula (11).
For specific limitations of the unmanned collective behavior control device based on visual perception, reference may be made to the above limitations of the unmanned collective behavior control method based on visual perception, and details are not repeated here. The modules in the above-mentioned unmanned clustered behavior control device based on visual perception may be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, an electronic device is provided, which may be a terminal, and an internal structure thereof may be as shown in fig. 21. The electronic device comprises a processor, a memory, a network interface, a display screen and an input device which are connected through a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the electronic device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a method for controlling behavior of an unmanned cluster based on visual perception. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configuration shown in fig. 21 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the computing device to which the present application may be applied, and that a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, an electronic device is provided, comprising a memory storing a computer program and a processor, which when executing the computer program, performs the steps of the method of the above-mentioned method embodiments.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent application shall be subject to the appended claims.

Claims (10)

1. A method for unmanned cluster behavior control based on visual perception, the method comprising:
acquiring visual image information of other individuals in the 360-degree vision range of the current individual in the unmanned cluster under an individual reference coordinate system; the individual reference coordinate system is constructed by taking the speed direction of the current individual as the reference direction of the current individual and taking the anticlockwise direction as the positive direction of the relative direction;
according to the shielding relation among individuals in the visual image information, distinguishing different individuals and determining the visual perception function of the current individual;
obtaining first-order visual information of the visual neighbors according to the visual perception function; the first order visual information includes: view occlusion angle and relative orientation;
according to the relative orientation sequence of the visual neighbors in the visual perception function, taking a visual neighbor set with a locally maximum view shielding angle in the current individual view as a salient neighbor set;
differentiating the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information of each salient neighbor, wherein the second-order visual information comprises relative orientation change and view shielding angle change;
determining the self-driving item of the current individual according to the speed and the expected speed of the current individual;
determining a calibration item of the current individual according to the relative orientation, the relative orientation change and the view shielding angle change of each salient neighbor of the current individual;
determining a repulsion item and an attraction item of the current individual according to the relative position and the view shielding angle of each salient neighbor of the current individual;
determining a speed decision equation of the current individual according to the self-driving term, the calibration term, the repulsion term and the attraction term;
determining the speed information of the current individual at the current moment according to the speed decision equation of the current individual;
and updating the motion state of the current individual according to the speed information at the current moment to obtain the position information of the current individual at the next moment.
2. The method according to claim 1, wherein determining the visual perception function of the current individual according to the occlusion relationship between individuals in the visual image information and distinguishing between different individuals comprises:
and distinguishing different individuals according to the shielding relation in the visual image information, setting the shielding position as 1 and the non-shielding position as 0, and obtaining the visual perception function of the current individual.
3. The method according to claim 1, wherein the step of regarding the set of visible neighbors with the locally largest view occlusion angle in the current individual view as the set of prominent neighbors according to the relative orientation order of the visible neighbors in the visual perception function comprises:
according to the visual perception function, the current individual obtains a view shielding angle sequence and an identity sequence according to the sequence of the opposite positions of other individuals in the view; wherein, the relative orientation sequence in the current individual visual field is
Figure QLYQS_1
Of (2)
Figure QLYQS_2
Element values of the sequence of view occlusion angles
Figure QLYQS_3
And element values of identity sequences
Figure QLYQS_4
iIs the serial number of the current individual,tas a matter of time, the time is,
Figure QLYQS_5
is a neighborjIn the current individualiOcclusion angle in the field of view; for continuous azimuth intervals which are not occluded in the current individual view, the elements of the view occlusion angle sequenceValue of
Figure QLYQS_6
And the value of an element of the identity sequence
Figure QLYQS_7
And taking the visible neighbor set with the local maximum view shielding angle in the current individual view as the highlighted neighbor set.
4. The method of claim 1, wherein differentiating the first order visual information of each prominent neighbor of the current individual to obtain second order visual information comprises:
to pair
Figure QLYQS_8
Time of day and
Figure QLYQS_9
differentiating the first-order visual information of the common salient neighbors of the current individual at the moment to obtain second-order visual information:
Figure QLYQS_10
,
wherein,
Figure QLYQS_11
is a neighborjIn the current individualiThe angle of occlusion in the field of view varies,
Figure QLYQS_12
is a neighborjIn the current individualiThe relative orientation in the field of view varies,
Figure QLYQS_13
is a neighborjIn the current individualiThe angle of the occlusion in the field of view,
Figure QLYQS_14
is a neighborjIn the current individualiThe relative orientation in the field of view,
Figure QLYQS_15
in order to differentiate the time intervals between the two,tis time.
5. The method of claim 1, wherein determining the self-propelled term for the current individual based on the speed of the current individual and the desired speed comprises:
according to the speed of the current individual and the expected speed, determining that the self-driving item of the current individual is as follows:
Figure QLYQS_16
,
wherein,
Figure QLYQS_17
in order to be a self-driven item,
Figure QLYQS_18
is a constant for self-driving and is,
Figure QLYQS_19
in order to be of the desired magnitude of the velocity,
Figure QLYQS_20
is the current individual
Figure QLYQS_21
At the current time
Figure QLYQS_22
The velocity of (2).
6. The method of claim 1, wherein determining the calibration term for the current individual based on the relative orientation, the change in relative orientation, and the change in view occlusion angle for each salient neighbor of the current individual comprises:
determining the calibration item of the current individual as follows according to the relative orientation, the relative orientation change and the view shielding angle change of each salient neighbor of the current individual:
Figure QLYQS_23
,
wherein,
Figure QLYQS_25
in order to be a calibration term for the calibration,
Figure QLYQS_28
is a neighborjIn the current individualiThe relative orientation in the field of view,
Figure QLYQS_31
is a neighborjIn the current individualiThe angle of occlusion in the field of view varies,
Figure QLYQS_26
is a neighborjIn the current individualiThe relative orientation in the field of view varies,
Figure QLYQS_29
is composed of
Figure QLYQS_32
Time of day and
Figure QLYQS_34
all the time are current individualsiThe total number of neighbors is highlighted,
Figure QLYQS_24
Figure QLYQS_27
Figure QLYQS_30
and
Figure QLYQS_33
is a constant.
7. The method of claim 1, wherein determining the repulsive term and attractive term for the current individual based on the relative orientation and view occlusion angle of each prominent neighbor of the current individual comprises:
determining the exclusion term of the current individual as follows according to the relative orientation and the view shielding angle of each salient neighbor of the current individual:
Figure QLYQS_35
,
wherein,
Figure QLYQS_36
in the case of the exclusive item or items,
Figure QLYQS_37
is a neighborjIn the current individualiThe relative orientation in the field of view,
Figure QLYQS_38
is a neighborjIn the current individualiThe angle of the occlusion in the field of view,
Figure QLYQS_39
Figure QLYQS_40
is a constant number of times, and is,
Figure QLYQS_41
is composed of
Figure QLYQS_42
Individual at the current momentiHighlighting the total number of neighbors;
according to the relative position and the view shielding angle of each salient neighbor of the current individual, determining that the attraction item of the current individual is as follows:
Figure QLYQS_43
,
wherein:
Figure QLYQS_44
in order to attract the terms to the user,
Figure QLYQS_45
is a neighborjIn the current individualiThe relative orientation in the field of view,
Figure QLYQS_46
is a neighborjIn the current individualiThe angle of the occlusion in the field of view,
Figure QLYQS_47
Figure QLYQS_48
is a constant number of times, and is,
Figure QLYQS_49
is composed of
Figure QLYQS_50
Individual at presentiHighlighting the total number of neighbors.
8. The method of claim 1, wherein the current individual velocity decision equation comprises: a current individual speed magnitude decision equation and a direction change decision equation;
determining a speed decision equation for the current individual based on the self-driving term, the calibration term, the repulsion term, and the attraction term, including:
projecting the self-driving item, the calibration item, the repulsion item and the attraction item to a direction parallel to the current individual speed direction and a direction perpendicular to the current individual speed direction to obtain a speed size decision equation and a direction change decision equation of the current individual; the current individual speed size decision equation and direction change decision equation are respectively as follows:
Figure QLYQS_51
,
Figure QLYQS_52
,
wherein,
Figure QLYQS_54
is the current individualiAt the current timetThe magnitude of the speed of the motor is varied,
Figure QLYQS_60
is the current individualiAt the current timetIs detected by the speed of the motor, and,
Figure QLYQS_64
in order to be a self-driven item,
Figure QLYQS_56
in order to be a calibration term, the calibration term,
Figure QLYQS_59
in the case of the exclusive item or items,
Figure QLYQS_63
in order to attract the items,
Figure QLYQS_67
is a neighborjIn the current individualiThe angle of the occlusion in the field of view,
Figure QLYQS_53
is a neighborjIn the current individualiThe relative orientation in the field of view,
Figure QLYQS_58
is a neighborjIn the current individualiThe angle of occlusion in the field of view varies,
Figure QLYQS_62
is a neighborjIn the current individualiThe relative orientation in the field of view changes,
Figure QLYQS_66
and
Figure QLYQS_55
parallel and perpendicular to the current individual, respectively
Figure QLYQS_57
At the current time
Figure QLYQS_61
Speed of (2)
Figure QLYQS_65
The unit vector of (2).
9. An unmanned cluster behavior control device based on visual perception, the device comprising:
the visual image acquisition module is used for acquiring visual image information of other individuals in the 360-degree vision field range of the current individual in the unmanned cluster under the individual reference coordinate system; the individual reference coordinate system is constructed by taking the speed direction of the current individual as the reference direction of the current individual and taking the anticlockwise direction as the positive direction of the relative direction;
the first-order visual information determining module is used for distinguishing different individuals according to the shielding relation among the individuals in the visual image information and determining the visual perception function of the current individual; obtaining first-order visual information of the visual neighbors according to the visual perception function; the first order visual information includes: view occlusion angle and relative orientation;
the second-order visual information determining module is used for taking a visual neighbor set with a locally maximum visual field shielding angle in the current individual visual field as a salient neighbor set according to the relative azimuth sequence of the visual neighbors in the visual perception function; differentiating the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information of each salient neighbor, wherein the second-order visual information comprises relative orientation change and view shielding angle change;
the decision module is used for determining the self-driven item of the current individual according to the speed and the expected speed of the current individual; determining a calibration item of the current individual according to the relative orientation, the relative orientation change and the view shielding angle change of each salient neighbor of the current individual; determining a repulsion item and an attraction item of the current individual according to the relative position and the view shielding angle of each salient neighbor of the current individual; determining a speed decision equation of the current individual according to the self-driving term, the calibration term, the repulsion term and the attraction term; determining the speed information of the current individual at the current moment according to the speed decision equation of the current individual;
and the driving module is used for updating the motion state of the current individual according to the speed information at the current moment to obtain the position information of the current individual at the next moment.
10. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the method of any one of claims 1 to 8 when executing the computer program.
CN202211569284.7A 2022-12-08 2022-12-08 Unmanned cluster behavior control method and device based on visual perception and electronic equipment Active CN115576359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211569284.7A CN115576359B (en) 2022-12-08 2022-12-08 Unmanned cluster behavior control method and device based on visual perception and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211569284.7A CN115576359B (en) 2022-12-08 2022-12-08 Unmanned cluster behavior control method and device based on visual perception and electronic equipment

Publications (2)

Publication Number Publication Date
CN115576359A CN115576359A (en) 2023-01-06
CN115576359B true CN115576359B (en) 2023-03-07

Family

ID=84590805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211569284.7A Active CN115576359B (en) 2022-12-08 2022-12-08 Unmanned cluster behavior control method and device based on visual perception and electronic equipment

Country Status (1)

Country Link
CN (1) CN115576359B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287829A (en) * 2019-06-12 2019-09-27 河海大学 A kind of video face identification method of combination depth Q study and attention model
CN112001937A (en) * 2020-09-07 2020-11-27 中国人民解放军国防科技大学 Group chasing and escaping method and device based on field-of-view perception
CN115202392A (en) * 2022-07-11 2022-10-18 中国人民解放军国防科技大学 Group adaptive behavior control method, device and equipment based on visual perception

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110043537A1 (en) * 2009-08-20 2011-02-24 University Of Washington Visual distortion in a virtual environment to alter or guide path movement
CN111515950B (en) * 2020-04-28 2022-04-08 腾讯科技(深圳)有限公司 Method, device and equipment for determining transformation relation of robot coordinate system and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287829A (en) * 2019-06-12 2019-09-27 河海大学 A kind of video face identification method of combination depth Q study and attention model
CN112001937A (en) * 2020-09-07 2020-11-27 中国人民解放军国防科技大学 Group chasing and escaping method and device based on field-of-view perception
CN115202392A (en) * 2022-07-11 2022-10-18 中国人民解放军国防科技大学 Group adaptive behavior control method, device and equipment based on visual perception

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Behavioral and neural markers of visual configural processing in social scene perception;Etienne AbassiLiuba Papeo;《NeuroImage》;全文 *
Jingtao Qi ; Liang Bai ; Yandong Xiao ; Wansen Wu ; Lu Liu.Group Chase and Escape of Biological Groups Based on a Visual Perception-Decision-Propulsion Model.《IEEE Access》.2020, *
The emergence of collective obstacle avoidance based on a visual perception mechanism;Jingtao Qi,Liang Bai,Yandong Xiao,Yingmei Wei,Wansen Wu;《Information Sciences》;全文 *

Also Published As

Publication number Publication date
CN115576359A (en) 2023-01-06

Similar Documents

Publication Publication Date Title
EP3674852B1 (en) Method and apparatus with gaze estimation
CN106687886B (en) Three-dimensional hybrid reality viewport
CN111819568B (en) Face rotation image generation method and device
JP7562504B2 (en) Deep Predictor Recurrent Neural Networks for Head Pose Prediction
Simoncelli Distributed representation and analysis of visual motion
Ong et al. A Bayesian filter for multi-view 3D multi-object tracking with occlusion handling
Kothari et al. Weakly-supervised physically unconstrained gaze estimation
Sudderth et al. Nonparametric belief propagation
WO2018155897A1 (en) Screen control method and device for virtual reality service
EP2614413B1 (en) Method and apparatus for object tracking and recognition
EP2405393B1 (en) Device, method and program for creating information for object position estimation
JP2015133691A (en) Imaging apparatus, image processing system, imaging method and recording medium
KR20200079170A (en) Gaze estimation method and gaze estimation apparatus
CN106662924A (en) Mouse sharing between a desktop and a virtual world
CN112381707B (en) Image generation method, device, equipment and storage medium
Morerio et al. Hand detection in first person vision
Zeng et al. Pixel modeling using histograms based on fuzzy partitions for dynamic background subtraction
US20230005220A1 (en) Shooting method, shooting instruction method, shooting device, and shooting instruction device
Marcus et al. An eye on visual sensor networks
Natarajan et al. Scalable decision-theoretic coordination and control for real-time active multi-camera surveillance
CN115576359B (en) Unmanned cluster behavior control method and device based on visual perception and electronic equipment
CN106127119B (en) Joint probabilistic data association method based on color image and depth image multiple features
Shi et al. I understand you: Blind 3d human attention inference from the perspective of third-person
Rossi et al. A new challenge: Behavioural analysis of 6-DoF user when consuming immersive media
EP3529776B1 (en) Method, device, and system for processing multimedia signal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant