CN115576359B - Unmanned cluster behavior control method and device based on visual perception and electronic equipment - Google Patents
- Publication number
- CN115576359B CN202211569284.7A CN202211569284A
- Authority
- CN
- China
- Prior art keywords
- current individual
- individual
- view
- neighbor
- current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
- G05D1/104—Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The application relates to the technical field of image vision, and in particular to a visual-perception-based unmanned cluster behavior control method and device and an electronic device. The method comprises: obtaining visual perception information within the field of view of the current individual, and processing it to obtain first-order visual information and second-order visual information; taking the first-order and second-order visual information as input, designing a speed decision equation that considers the current individual's self-driving term, calibration term, repulsion term and attraction term, and controlling the current individual according to that equation. In a group constructed by this method, each individual observes external information visually; group order is achieved through the calibration term, whose input is the second-order visual information, and collision avoidance and aggregation are achieved through the repulsion and attraction terms, whose inputs are the first-order visual information. The unmanned cluster constructed in this way needs no central control, and each individual makes decisions using only its visual information, so the cluster can operate in communication-denied environments that existing cluster methods cannot handle.
Description
Technical Field
The present application relates to the field of image vision technologies, and in particular to an unmanned cluster behavior control method and device based on visual perception, and an electronic device.
Background
In recent years, unmanned cluster systems have been widely applied in military and civil fields owing to their cost advantage, robust self-healing, capability multiplication and similar characteristics. Existing group behavior generation methods omit the individual's information acquisition process and design inter-individual interaction rules that take communication-based position and velocity information as input. Following the phenomenology of biological groups, these interaction rules comprise three responses: a calibration response that aligns the group's motion directions, a repulsion response that achieves collision avoidance between individuals, and an attraction response that achieves group aggregation. In such control algorithms, data transmission between individuals in the unmanned cluster depends on a network, so the transmitted data volume and the network reliability affect the real-time performance and stability of the cluster's behavior control.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an unmanned cluster behavior control method based on visual perception, a corresponding apparatus, and an electronic device. Starting from the sensory neuroscience of biological individuals, the method models the individual's information acquisition as visual perception, replacing the original information acquisition that depends on communication. The unmanned cluster constructed in this way needs no central control, and each individual makes decisions using only its visual information, so the cluster can operate in communication-denied environments that existing cluster methods cannot handle.
A method of unmanned cluster behavior control based on visual perception, the method comprising:
acquiring visual image information of other individuals within the current individual's 360-degree field of view in the unmanned cluster, under an individual reference coordinate system; the individual reference coordinate system is constructed by taking the velocity direction of the current individual as its reference direction and the counterclockwise direction as the positive direction of relative orientation.
Distinguishing different individuals according to the occlusion relationships among individuals in the visual image information, and determining the visual perception function of the current individual.
Obtaining first-order visual information of the visible neighbors according to the visual perception function; the first-order visual information includes: the view occlusion angle and the relative orientation.
According to the relative orientation order of the visible neighbors in the visual perception function, taking the set of visible neighbors whose view occlusion angle is locally maximal in the current individual's field of view as the salient neighbor set.
Differencing the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information of each salient neighbor; the second-order visual information includes the relative orientation change and the view occlusion angle change.
Determining the self-driving term of the current individual according to the speed of the current individual and the desired speed.
Determining the calibration term of the current individual according to the relative orientation, relative orientation change and view occlusion angle change of each salient neighbor of the current individual.
Determining the repulsion term and attraction term of the current individual according to the relative orientation and view occlusion angle of each salient neighbor of the current individual.
Determining the speed decision equation of the current individual according to the self-driving term, calibration term, repulsion term and attraction term.
Determining the velocity information of the current individual at the current moment according to the speed decision equation of the current individual.
Updating the motion state of the current individual according to the velocity information at the current moment to obtain the position information of the current individual at the next moment.
An unmanned cluster behavior control device based on visual perception, the device comprising:
the visual image acquisition module is configured to acquire visual image information of other individuals within the current individual's 360-degree field of view in the unmanned cluster, under an individual reference coordinate system; the individual reference coordinate system is constructed by taking the velocity direction of the current individual as its reference direction and the counterclockwise direction as the positive direction of relative orientation.
The first-order visual information determining module is configured to distinguish different individuals according to the occlusion relationships among individuals in the visual image information, determine the visual perception function of the current individual, and obtain first-order visual information of the visible neighbors according to the visual perception function; the first-order visual information includes the view occlusion angle and the relative orientation.
The second-order visual information determining module is configured to take the set of visible neighbors whose view occlusion angle is locally maximal in the current individual's field of view as the salient neighbor set, according to the relative orientation order of the visible neighbors in the visual perception function; and to difference the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information of each salient neighbor, the second-order visual information including the relative orientation change and the view occlusion angle change.
The decision module is configured to determine the self-driving term of the current individual according to the speed of the current individual and the desired speed; determine the calibration term of the current individual according to the relative orientation, relative orientation change and view occlusion angle change of each salient neighbor; determine the repulsion term and attraction term of the current individual according to the relative orientation and view occlusion angle of each salient neighbor; determine the speed decision equation of the current individual according to the self-driving term, calibration term, repulsion term and attraction term; and determine the velocity information of the current individual at the current moment according to the speed decision equation of the current individual.
The driving module is configured to update the motion state of the current individual according to the velocity information at the current moment to obtain the position information of the current individual at the next moment.
An electronic device comprising a memory storing a computer program and a processor that implements any of the above methods when executing the computer program.
The unmanned cluster behavior control method and device based on visual perception and the electronic device comprise the following: the current individual in the unmanned cluster acquires visual perception information within its field of view and processes it to obtain first-order and second-order visual information; taking the first-order and second-order visual information as input, a speed decision equation of the current individual is designed that considers the individual's self-driving term, calibration term, repulsion term and attraction term, and the current individual is controlled according to the decision equation. In a group constructed by this model, each individual observes external information visually; group order is achieved through the calibration term, whose input is the second-order visual information, and collision avoidance and aggregation are achieved through the repulsion and attraction terms, whose inputs are the first-order visual information. The unmanned cluster constructed in this way needs no central control, and each individual makes decisions using only its visual information, so the cluster can operate in communication-denied environments that existing cluster methods cannot handle.
Drawings
FIG. 1 is a schematic flow chart of a method for controlling behavior of an unmanned cluster based on visual perception in one embodiment;
FIG. 2 is a schematic diagram of moving circular individuals in 2-dimensional space in another embodiment;
FIG. 3 is a schematic diagram of the individual reference coordinate system in another embodiment;
FIG. 7 is a schematic diagram of the second-order visual information generated by the parallel component of a neighbor's relative velocity in another embodiment;
FIG. 8 shows the relative orientation change generated by the parallel component of the relative velocity for neighbors at different relative positions in another embodiment;
FIG. 9 shows the view occlusion angle change generated by the parallel component of the relative velocity for neighbors at different relative positions in another embodiment;
FIG. 10 shows the second-order visual information generated by the parallel component of the relative velocity for neighbors at different relative orientations in another embodiment;
FIG. 11 is a schematic diagram of the second-order visual information generated by the perpendicular component of a neighbor's relative velocity in another embodiment;
FIG. 12 shows the relative orientation change generated by the perpendicular component of the relative velocity for neighbors at different relative positions in another embodiment;
FIG. 13 shows the view occlusion angle change generated by the perpendicular component of the relative velocity for neighbors at different relative positions in another embodiment;
FIG. 14 shows the second-order visual information generated by the perpendicular component of the relative velocity for neighbors at different relative orientations in another embodiment;
FIG. 15 is a schematic diagram of view occlusion angles at different distances in another embodiment;
FIG. 16 is a graph of the repulsion term factor at different distances in another embodiment;
FIG. 17 is a graph of the attraction term factor at different distances in another embodiment;
FIG. 18 is a schematic illustration of the effect of the repulsion and attraction terms on the change of an individual's velocity direction in another embodiment;
FIG. 19 is a schematic illustration of the effect of the repulsion and attraction terms on the change of an individual's velocity magnitude in another embodiment;
FIG. 20 is a block diagram of an unmanned cluster behavior control device based on visual perception in one embodiment;
FIG. 21 is a diagram of the internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided a visual perception-based unmanned cluster behavior control method, comprising the steps of:
step 100: and acquiring visual image information of other individuals in the 360-degree visual field range of the current individual in the unmanned cluster under the individual reference coordinate system.
The individual reference coordinate system is constructed by taking the velocity direction of the current individual ($\hat{\mathbf{v}}_i$) as the reference direction of the current individual and the counterclockwise direction as the positive direction of relative orientation.
Specifically, the current individual uses a camera to acquire visual image information of other individuals within its 360-degree field of view; the visual image information is the real-time field-of-view information captured by the camera.
The unmanned cluster may be a group of unmanned aerial vehicles or a group of task robots, which is not specifically limited here. Taking a task robot group as an example, the task robots include the current task robot and the other task robots within its field of view; the current task robot performs behavior control according to the visual image information, obtained under the individual reference coordinate system, of the other individuals within its 360-degree field of view. A task robot may carry a camera that can be turned in each degree of freedom to acquire the surrounding environment information, or several cameras each responsible for part of the field of view, with the 360-degree visual image information obtained by splicing. Since there are various ways to acquire the visual image information of other individuals within the field of view, this embodiment does not specifically limit the acquisition step.
This embodiment considers $N$ moving circular individuals of diameter $D$ located at positions $\mathbf{p}_i$ in 2-dimensional space; the velocity vector $\mathbf{v}_i$ of the current individual can be decomposed into a speed magnitude $v_i$ and a speed direction $\hat{\mathbf{v}}_i$, as shown in FIG. 2.
To simulate the way a biological individual acquires external information, the individual reference coordinate system is constructed with the speed direction $\hat{\mathbf{v}}_i$ as the reference direction of individual $i$ and the counterclockwise direction as positive, as shown in FIG. 3, where $d_{FB}$ denotes the front-back distance (positive forward) and $d_{LR}$ denotes the left-right distance (positive to the right). Under this coordinate system, the relative orientation $\varphi_{ij}$ and the view occlusion angle $\theta_{ij}$ of a neighbor $j$ in the field of view of individual $i$ are defined.
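As a concrete illustration, the first-order quantities defined above follow from plane geometry: the relative orientation is the bearing of the neighbor measured from the current heading, and the view occlusion angle subtended by a circular body of diameter $D$ at distance $d$ is $2\arcsin(D/2d)$. The following minimal Python sketch assumes this geometry; the function name and arguments are illustrative, not the patent's own notation.

```python
import math

def first_order_visual_info(p_i, v_i, p_j, diameter):
    """First-order visual information of circular neighbor j as seen by
    individual i (2-D, counterclockwise positive); illustrative sketch."""
    dx, dy = p_j[0] - p_i[0], p_j[1] - p_i[1]
    dist = math.hypot(dx, dy)
    heading = math.atan2(v_i[1], v_i[0])                     # reference direction
    phi = (math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi
    theta = 2 * math.asin(min(1.0, diameter / (2 * dist)))   # occlusion angle
    return phi, theta                                        # (orientation, angle)
```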
Step 102: distinguishing different individuals according to the occlusion relationships among individuals in the visual image information, and determining the visual perception function of the current individual.
Specifically, as shown in fig. 4, under the individual reference coordinate system the current individual $i$, taking the counterclockwise direction as positive, obtains by visual perception a function over the $[0, 2\pi)$ range of directions centered on itself — a visual perception function representing occlusion or non-occlusion in each direction. As shown in fig. 5, the individuals adopt the higher-order view of living beings, in which occlusion between individuals is taken into account when distinguishing individuals.
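A minimal sketch of such a binary visual perception function, assuming a discretized azimuth and reusing the first_order_visual_info helper sketched above (both names are illustrative):

```python
import math
import numpy as np

def visual_perception_function(p_i, v_i, neighbor_positions, diameter, bins=360):
    """Binary field over [0, 2*pi): 1 where the view is occluded by some
    neighbor, 0 elsewhere; overlapping occlusions merge naturally."""
    V = np.zeros(bins, dtype=int)
    for p_j in neighbor_positions:
        phi, theta = first_order_visual_info(p_i, v_i, p_j, diameter)
        lo = int(((phi - theta / 2) % (2 * math.pi)) * bins / (2 * math.pi))
        hi = int(((phi + theta / 2) % (2 * math.pi)) * bins / (2 * math.pi))
        if lo <= hi:
            V[lo:hi + 1] = 1
        else:                        # occluded interval wraps past 2*pi
            V[lo:] = 1
            V[:hi + 1] = 1
    return V
```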
Step 104: obtaining first-order visual information of the visible neighbors according to the visual perception function; the first-order visual information includes: the view occlusion angle and the relative orientation.
Step 106: according to the relative orientation order of the visible neighbors in the visual perception function, taking the set of visible neighbors whose view occlusion angle is locally maximal in the current individual's field of view as the salient neighbor set.
Specifically, inspired by the attention mechanism of biological individuals, the current individual $i$ attends only to the visible neighbors whose view occlusion angle $\theta_{ij}$ is locally maximal. From the visual perception function obtained by observation at time $t$, the current individual obtains, in the order of the relative orientations of the individuals in the field of view, a view occlusion angle sequence and an identity sequence (as shown in fig. 6).
Step 108: differencing the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information of each salient neighbor; the second-order visual information includes the relative orientation change and the view occlusion angle change.
Specifically, the second-order visual information consists of the change of the view occlusion angle and the change of the relative orientation, and reflects the change of relative position between individuals.
Step 110: determining the self-driving term of the current individual according to the speed of the current individual and the desired speed.
Specifically, the self-driving term of the current individual is used to make the current individual move at the desired speed magnitude $v_0$.
Step 112: determining the calibration term of the current individual according to the relative orientation, relative orientation change and view occlusion angle change of each salient neighbor of the current individual.
Specifically, the calibration term is designed to eliminate the second-order visual information; the order of the group is achieved through the calibration term, whose input is the second-order visual information.
Eliminating the second-order visual information means that the relative positions between individuals remain unchanged, i.e., the individuals reach agreement on the direction of motion.
Step 114: determining the repulsion term and attraction term of the current individual according to the relative orientation and view occlusion angle of each salient neighbor of the current individual.
Specifically, collision avoidance and aggregation of the individuals in the unmanned cluster are achieved through the repulsion and attraction terms designed from the first-order visual information.
Step 116: determining the speed decision equation of the current individual according to the self-driving term, calibration term, repulsion term and attraction term.
Specifically, taking the second-order visual information (the view occlusion angle change $\Delta\theta_{ij}$ and the relative orientation change $\Delta\varphi_{ij}$) and the first-order visual information (the view occlusion angle $\theta_{ij}$ and the relative orientation $\varphi_{ij}$) as input, and considering the four response modes of the individual's self-driving term $a^{sp}_i$, calibration term $a^{al}_i$, repulsion term $a^{rep}_i$ and attraction term $a^{att}_i$, the speed decision equation of the current individual is defined as:

$$\dot{\mathbf{v}}_i(t) = a^{sp}_i + a^{al}_i + a^{rep}_i + a^{att}_i \qquad (1)$$
step 118: and determining the speed information of the current individual at the current moment according to the speed decision equation of the current individual.
Specifically, in the decision making process, the current individual makes a decision on the speed and the speed direction according to the extracted visual information.
Step 120: updating the motion state of the current individual according to the velocity information at the current moment to obtain the position information of the current individual at the next moment.
Specifically, according to the velocity information of the current individual at the current moment, the motion equation of the current individual is obtained as:

$$\dot{\mathbf{p}}_i(t) = \mathbf{v}_i(t) \qquad (2)$$

where $\mathbf{p}_i$ is the position of the current individual $i$ and $\mathbf{v}_i(t)$ is the velocity of the current individual $i$ at the current moment $t$;
and updating the motion state of the current individual according to the motion equation of the current individual to obtain the position information of the current individual at the next moment.
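As an illustration, assuming a discrete controller period $\Delta t$, equation (2) can be integrated with an explicit Euler step (a sketch; the names are illustrative):

```python
def update_position(p_i, v_i, dt):
    """Explicit Euler step of the motion equation dp/dt = v(t) (eq. 2)."""
    return (p_i[0] + v_i[0] * dt, p_i[1] + v_i[1] * dt)
```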
The above method for controlling unmanned cluster behavior based on visual perception comprises: the current individual in the unmanned cluster acquires visual perception information within its field of view and processes it to obtain first-order and second-order visual information; taking the first-order and second-order visual information as input, a speed decision equation of the current individual is designed that considers the individual's self-driving term, calibration term, repulsion term and attraction term, and the current individual is controlled according to the decision equation. In a group constructed by this model, each individual observes external information visually; group order is achieved through the calibration term, whose input is the second-order visual information, and collision avoidance and aggregation are achieved through the repulsion and attraction terms, whose inputs are the first-order visual information. The unmanned cluster constructed in this way needs no central control, and each individual makes decisions using only its visual information, so the cluster can operate in communication-denied environments that existing cluster methods cannot handle.
In one embodiment, step 102 comprises: distinguishing different individuals according to the occlusion relationships in the visual image information, setting occluded positions to 1 and unoccluded positions to 0, to obtain the visual perception function of the current individual.
In one embodiment, step 106 includes: according to the visual perception function, the current individual obtains a view occlusion angle sequence and an identity sequence in the order of the relative orientations of the other individuals in the field of view. Specifically, each element of the view occlusion angle sequence records the angular width of the corresponding occluded or unoccluded azimuth interval, and each element of the identity sequence records whether (and by which neighbor) that interval is occluded. From these two sequences, the set of visible neighbors whose view occlusion angle is locally maximal in the field of view of individual $i$ is defined as the salient neighbor set.
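A minimal sketch of extracting the salient neighbor set from such orientation-ordered sequences, assuming for illustration that the identity value 0 marks an unoccluded interval:

```python
def salient_neighbor_set(theta_seq, id_seq):
    """Identities of neighbors whose view occlusion angle is a local
    maximum in the cyclic, orientation-ordered sequence; a sketch, not
    the patent's exact procedure."""
    n = len(theta_seq)
    salient = []
    for k in range(n):
        if id_seq[k] == 0:                  # unoccluded interval, skip
            continue
        left = theta_seq[(k - 1) % n]
        right = theta_seq[(k + 1) % n]
        if theta_seq[k] >= left and theta_seq[k] >= right:
            salient.append(id_seq[k])
    return salient
```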
In one embodiment, the second-order visual information includes the view occlusion angle change and the relative orientation change; step 108 comprises: differencing the first-order visual information of the neighbors that are salient for the current individual at both time $t$ and time $t-\Delta t$, to obtain the second-order visual information:

$$\Delta\theta_{ij}(t) = \theta_{ij}(t) - \theta_{ij}(t-\Delta t), \qquad \Delta\varphi_{ij}(t) = \varphi_{ij}(t) - \varphi_{ij}(t-\Delta t) \qquad (3)$$

where $\Delta\theta_{ij}$ is the view occlusion angle change of neighbor $j$ in the field of view of the current individual $i$, $\Delta\varphi_{ij}$ is the relative orientation change of neighbor $j$ in the field of view of the current individual $i$, $\theta_{ij}$ is the view occlusion angle of neighbor $j$ in the field of view of the current individual $i$, $\varphi_{ij}$ is the relative orientation of neighbor $j$ in the field of view of the current individual $i$, $\Delta t$ is the differencing time interval, and $t$ is time.
That is, the current individual $i$ differences the first-order visual information, at the two successive moments, of each neighbor that is salient at both $t$ and $t-\Delta t$, obtaining the second-order visual information (the view occlusion angle change $\Delta\theta_{ij}$ and the relative orientation change $\Delta\varphi_{ij}$).
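Assuming per-neighbor first-order records keyed by identity, equation (3) can be sketched as follows (relative orientation differences are wrapped to $(-\pi,\pi]$, an illustrative choice, since the patent's wrapping convention is not shown):

```python
import math

def second_order_visual_info(prev, curr):
    """Difference form of equation (3) for neighbors salient at both
    t - dt and t; prev/curr map neighbor id -> (phi, theta)."""
    out = {}
    for j in prev.keys() & curr.keys():      # salient at both moments
        d_phi = (curr[j][0] - prev[j][0] + math.pi) % (2 * math.pi) - math.pi
        d_theta = curr[j][1] - prev[j][1]
        out[j] = (d_phi, d_theta)
    return out
```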
In one embodiment, step 110 comprises: according to the speed of the current individual and the desired speed, determining the self-driving term of the current individual as:

$$a^{sp}_i = \alpha\,\big(v_0 - \lVert\mathbf{v}_i(t)\rVert\big)\,\hat{\mathbf{v}}_i \qquad (4)$$

where $a^{sp}_i$ is the self-driving term, $\alpha$ is the self-driving constant, $v_0$ is the desired speed magnitude, and $\mathbf{v}_i(t)$ is the velocity of the current individual $i$ at the current moment $t$.
The self-driving term, expressed as a linear function, makes the individual move at the desired speed magnitude $v_0$.
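A one-line sketch of equation (4) as reconstructed above (the linear form is inferred from the surrounding description and is labeled an assumption):

```python
def self_driving_term(speed, v0, alpha=1.0):
    """Linear self-propulsion (eq. 4): pushes the speed magnitude toward
    v0; acts along the current heading."""
    return alpha * (v0 - speed)
```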
In one embodiment, step 112 includes: according to the relative orientation, relative orientation change and view occlusion angle change of each salient neighbor of the current individual, determining the calibration term $a^{al}_i$ of the current individual as a sum, over the $N_{al}$ neighbors that are salient at both time $t$ and time $t-\Delta t$, of the relative orientation changes $\Delta\varphi_{ij}$ and view occlusion angle changes $\Delta\theta_{ij}$, weighted by functions of the relative orientation $\varphi_{ij}$ with constant coefficients $C_1$, $C_2$, $C_3$ and $C_4$ (equation (5)).
Specifically, the current individual $i$ designs the calibration term with the second-order visual information ($\Delta\theta_{ij}$ and $\Delta\varphi_{ij}$) as input, taking the analysis of crowd data as a heuristic. With the current individual $i$ as the observer, the relative velocity of a visible neighbor $j$ can be decomposed into a component parallel to the velocity $\mathbf{v}_i$ and a component perpendicular to it (as shown in FIG. 3). The current individual reaches consensus with neighbor $j$ on the speed magnitude by changing its speed according to the second-order visual information generated by the parallel component, and on the speed direction according to the second-order visual information generated by the perpendicular component.
The second-order visual information generated by the parallel component for neighbors at different relative positions is shown in FIGS. 7-10. The parallel component can be decomposed, according to the relative position, into a tangential component and a normal component, which respectively produce the relative orientation change and the view occlusion change that the current individual $i$ sees of neighbor $j$. When neighbor $j$ lies to the left or right of the current individual, the tangential component is largest and the resulting relative orientation change is largest, so in deciding its change of speed magnitude the current individual weights the relative orientation changes by a factor of the relative orientation that peaks at the left and right; when the relative orientation is front or rear, the normal component is largest and the resulting view occlusion change is largest, so the view occlusion changes are weighted by a factor that peaks at the front and rear. Taking the case where the parallel component points in the same direction as $\mathbf{v}_i$ as an example: whether the neighbor is on the right, left, front or rear side, the current individual should accelerate to eliminate the resulting $\Delta\varphi_{ij}$ and $\Delta\theta_{ij}$, so the corresponding coefficients are all designed to be less than 0. In this way the current individual responds to the second-order visual information generated by the parallel component and reaches consensus with neighbor $j$ on the speed magnitude.
Likewise, the second-order visual information generated by the perpendicular component for neighbors at different relative orientations is shown in FIGS. 11-14. When the relative orientation is front or rear, the tangential component is largest and the resulting relative orientation change is largest, so in deciding its change of velocity direction the current individual weights the relative orientation changes by a factor that peaks at the front and rear; when the relative orientation is left or right, the normal component is largest and the resulting view occlusion change is largest, so the view occlusion changes are weighted by a factor that peaks at the left and right. Taking the case where the perpendicular component points to the left as an example: whichever side the neighbor is on, the current individual should turn counterclockwise to eliminate the resulting $\Delta\varphi_{ij}$ and $\Delta\theta_{ij}$; accordingly, the coefficient of the relative-orientation-change term is designed to be less than 0 and the coefficient of the view-occlusion-change term to be greater than 0. In this way the current individual responds to the second-order visual information generated by the perpendicular component and reaches consensus with neighbor $j$ on the speed direction.
In one embodiment, step 114 comprises: according to the relative orientation and view occlusion angle of each salient neighbor of the current individual, determining the repulsion term $a^{rep}_i$ of the current individual as a sum, over the $N_t$ neighbors salient at the current moment, of contributions along the relative orientation $\varphi_{ij}$ whose strength increases with the view occlusion angle $\theta_{ij}$, with constant coefficients (equation (6)); and determining the attraction term $a^{att}_i$ of the current individual as a sum of contributions along the relative orientation whose strength decreases with the view occlusion angle, with constant coefficients (equation (7)).
Specifically, the repulsion term $a^{rep}_i$ and the attraction term $a^{att}_i$ take the first-order visual information ($\theta_{ij}$ and $\varphi_{ij}$) as input. The projections of the relative orientation vector onto the directions parallel and perpendicular to the velocity form the directional factors of the repulsion and attraction terms, so these projections characterize the influence of the relative orientation on repulsion and attraction. The closer the distance between individuals, the wider the occluded angle of the field of view and the greater the need for repulsion (as shown in FIGS. 15 and 16); the repulsion strength factor is therefore set to increase with $\theta_{ij}$. Conversely, by setting the attraction strength factor to decrease with $\theta_{ij}$, the farther the distance between individuals (the narrower the occluded angle), the greater the need for attraction (as shown in FIGS. 15 and 17). These two factors thus characterize the influence of the relative distance on repulsion and attraction. For greater clarity, the effect of neighbors at different relative positions is shown in FIGS. 18-19. A neighbor in front of (or behind) the current individual at a small distance generates a repulsive force that makes the individual decelerate (or accelerate); conversely, a neighbor in front of (or behind) the individual at a greater distance generates an attractive force that makes it accelerate (or decelerate). Likewise, a neighbor to the left (or right) of the current individual at a small distance generates a repulsive force that makes it turn clockwise (or counterclockwise), while a neighbor to the left (or right) at a greater distance generates an attractive force that makes it turn counterclockwise (or clockwise).
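The patent's exact strength factors in equations (6) and (7) are given by its figures; as a purely illustrative stand-in consistent with the monotonicity just described (increasing in $\theta$ for repulsion, decreasing for attraction), one might sketch:

```python
def repulsion_factor(theta, theta_ref=0.5, c_rep=1.0):
    """Hypothetical strength factor: grows with the occlusion angle,
    i.e. with proximity (cf. FIGS. 15-16); not the patent's eq. (6)."""
    return c_rep * max(0.0, theta - theta_ref)

def attraction_factor(theta, theta_ref=0.5, c_att=1.0):
    """Hypothetical strength factor: grows as the occlusion angle
    shrinks, i.e. with distance (cf. FIGS. 15, 17); not the patent's
    eq. (7)."""
    return c_att * max(0.0, theta_ref - theta)
```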
In one embodiment: in mechanics, the instantaneous change of an object's speed is determined by the tangential force, while the instantaneous change of its direction is determined by the normal force. The speed decision equation of the current individual accordingly comprises a speed magnitude decision equation and a direction change decision equation. Step 116 includes: projecting the self-driving term, calibration term, repulsion term and attraction term onto the directions parallel and perpendicular to the current individual's velocity, to obtain the speed magnitude decision equation and the direction change decision equation of the current individual:

$$\dot{v}_i(t) = \big(a^{sp}_i + a^{al}_i + a^{rep}_i + a^{att}_i\big)\cdot\hat{\mathbf{e}}^{\parallel}_i \qquad (8)$$

$$\dot{\psi}_i(t) = \big(a^{sp}_i + a^{al}_i + a^{rep}_i + a^{att}_i\big)\cdot\hat{\mathbf{e}}^{\perp}_i \qquad (9)$$

where $v_i(t)$ is the speed magnitude of the current individual $i$ at the current moment $t$, $\psi_i(t)$ is its velocity direction, $a^{sp}_i$ is the self-driving term, $a^{al}_i$ is the calibration term, $a^{rep}_i$ is the repulsion term, $a^{att}_i$ is the attraction term; the terms take as input the view occlusion angle $\theta_{ij}$ and relative orientation $\varphi_{ij}$ of each neighbor $j$ in the field of view of the current individual $i$ and their changes $\Delta\theta_{ij}$ and $\Delta\varphi_{ij}$; and $\hat{\mathbf{e}}^{\parallel}_i$ and $\hat{\mathbf{e}}^{\perp}_i$ are the unit vectors parallel and perpendicular to the velocity $\mathbf{v}_i(t)$ of the current individual $i$ at the current moment $t$.
Substituting expressions (4) to (7) into expressions (8) and (9) yields the speed magnitude decision equation (10) and the direction change decision equation (11) of the current individual, where $\alpha$ is the self-driving constant, $v_0$ is the desired speed magnitude, $N_{al}$ is the total number of neighbors salient at both time $t$ and time $t-\Delta t$, and $N_t$ is the total number of neighbors salient at the current moment.
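A sketch of one decision step built on the projections in equations (8) and (9), assuming the four terms have already been evaluated and projected (all names are illustrative):

```python
import math

def velocity_decision_step(speed, heading, a_par, a_perp, dt):
    """Tangential projection a_par updates the speed magnitude (eq. 8);
    normal projection a_perp updates the heading, counterclockwise
    positive (eq. 9); a sketch under the stated assumptions."""
    speed = speed + a_par * dt
    heading = heading + a_perp * dt
    return speed, heading, (speed * math.cos(heading),
                            speed * math.sin(heading))
```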
It should be understood that, although the steps in the flowchart of fig. 1 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 20, there is provided a visual perception-based unmanned cluster behavior control apparatus, including: the device comprises a visual image acquisition module, a first-order visual information determination module, a second-order visual information determination module, a decision module and a driving module, wherein:
the visual image acquisition module is used for acquiring visual image information of other individuals in the 360-degree view range of the current individual in the unmanned cluster under the individual reference coordinate system; the individual reference coordinate system is constructed by taking the speed direction of the current individual as the reference direction of the current individual and taking the anticlockwise direction as the positive direction of the relative direction.
The first-order visual information determining module is used for distinguishing different individuals according to the shielding relation among the individuals in the visual image information and determining the visual perception function of the current individual; obtaining first-order visual information of the visual neighbors according to the visual perception function; the first order visual information includes: view obscuration angles and relative orientations.
The second-order visual information determining module is used for taking a visual neighbor set with a local maximum view shielding angle in the current individual view as a highlight neighbor set according to the relative orientation sequence of the visual neighbors in the visual perception function; and differentiating the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information of each salient neighbor, wherein the second-order visual information comprises relative orientation change and view shielding angle change.
The decision module is used for determining the self-driven item of the current individual according to the speed and the expected speed of the current individual; determining a calibration item of the current individual according to the relative orientation, the relative orientation change and the view shielding angle change of each salient neighbor of the current individual; determining a repulsion item and an attraction item of the current individual according to the relative position and the view shielding angle of each prominent neighbor of the current individual; determining a speed decision equation of the current individual according to the self-driving term, the calibration term, the repulsion term and the attraction term; and determining the speed information of the current individual at the current moment according to the speed decision equation of the current individual.
And the driving module is used for updating the motion state of the current individual according to the speed information at the current moment to obtain the position information of the current individual at the next moment.
In one embodiment, the first-order visual information determining module is further configured to distinguish different individuals according to the occlusion relationships in the visual image information, setting occluded positions to 1 and unoccluded positions to 0, to obtain the visual perception function of the current individual.
In one embodiment, the second-order visual information determining module is further configured to obtain, according to the visual perception function, the current individual's view occlusion angle sequence and identity sequence in the order of the relative orientations of the other individuals in the field of view, where each element of the view occlusion angle sequence records the angular width of the corresponding occluded or unoccluded azimuth interval and each element of the identity sequence records whether (and by which neighbor) that interval is occluded; and to take the set of visible neighbors whose view occlusion angle is locally maximal in the current individual's field of view as the salient neighbor set.
In one embodiment, the second-order visual information includes the view occlusion angle change and the relative orientation change; the second-order visual information determining module is further configured to difference the first-order visual information of the neighbors that are salient for the current individual at both time $t$ and time $t-\Delta t$, the resulting second-order visual information being given by equation (3).
In one embodiment, the decision module is further configured to determine the self-driving term of the current individual according to the speed of the current individual and the desired speed, as shown in equation (4).
In one embodiment, the decision module is further configured to determine the calibration term of the current individual according to the relative orientation, relative orientation change and view occlusion angle change of each salient neighbor of the current individual, as shown in equation (5).
In one embodiment, the decision module is further configured to determine the repulsion term of the current individual, as shown in equation (6), according to the relative orientation and view occlusion angle of each salient neighbor of the current individual; and to determine the attraction term of the current individual, as shown in equation (7), according to the same quantities.
In one embodiment, the speed decision equation of the current individual comprises a speed magnitude decision equation and a direction change decision equation; the decision module is further configured to project the self-driving term, calibration term, repulsion term and attraction term onto the directions parallel and perpendicular to the current individual's velocity to obtain the speed magnitude decision equation and the direction change decision equation of the current individual, as shown in equations (10) and (11) respectively.
For specific limitations of the unmanned collective behavior control device based on visual perception, reference may be made to the above limitations of the unmanned collective behavior control method based on visual perception, and details are not repeated here. The modules in the above-mentioned unmanned clustered behavior control device based on visual perception may be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, an electronic device is provided, which may be a terminal, and an internal structure thereof may be as shown in fig. 21. The electronic device comprises a processor, a memory, a network interface, a display screen and an input device which are connected through a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the electronic device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a method for controlling behavior of an unmanned cluster based on visual perception. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configuration shown in fig. 21 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the computing device to which the present application may be applied, and that a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, an electronic device is provided, comprising a memory storing a computer program and a processor, which when executing the computer program, performs the steps of the method of the above-mentioned method embodiments.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent application shall be subject to the appended claims.
Claims (10)
1. A method for unmanned cluster behavior control based on visual perception, the method comprising:
acquiring visual image information of other individuals within the current individual's 360-degree field of view in the unmanned cluster, under an individual reference coordinate system; the individual reference coordinate system is constructed by taking the velocity direction of the current individual as its reference direction and the counterclockwise direction as the positive direction of relative orientation;
distinguishing different individuals according to the occlusion relationships among individuals in the visual image information, and determining the visual perception function of the current individual;
obtaining first-order visual information of the visible neighbors according to the visual perception function; the first-order visual information includes: the view occlusion angle and the relative orientation;
according to the relative orientation order of the visible neighbors in the visual perception function, taking the set of visible neighbors whose view occlusion angle is locally maximal in the current individual's field of view as the salient neighbor set;
differencing the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information of each salient neighbor, the second-order visual information including the relative orientation change and the view occlusion angle change;
determining the self-driving term of the current individual according to the speed of the current individual and the desired speed;
determining the calibration term of the current individual according to the relative orientation, relative orientation change and view occlusion angle change of each salient neighbor of the current individual;
determining the repulsion term and attraction term of the current individual according to the relative orientation and view occlusion angle of each salient neighbor of the current individual;
determining the speed decision equation of the current individual according to the self-driving term, calibration term, repulsion term and attraction term;
determining the velocity information of the current individual at the current moment according to the speed decision equation of the current individual;
and updating the motion state of the current individual according to the velocity information at the current moment to obtain the position information of the current individual at the next moment.
2. The method according to claim 1, wherein distinguishing different individuals according to the occlusion relationships among individuals in the visual image information and determining the visual perception function of the current individual comprises:
distinguishing different individuals according to the occlusion relationships in the visual image information, setting occluded positions to 1 and unoccluded positions to 0, to obtain the visual perception function of the current individual.
3. The method according to claim 1, wherein taking the set of visible neighbors whose view occlusion angle is locally maximal in the current individual's field of view as the salient neighbor set, according to the relative orientation order of the visible neighbors in the visual perception function, comprises:
according to the visual perception function, the current individual obtains a view occlusion angle sequence and an identity sequence in the order of the relative orientations of the other individuals in the field of view, where $i$ is the serial number of the current individual, $t$ is time, and $\theta_{ij}$ is the occlusion angle of neighbor $j$ in the field of view of the current individual $i$; each element of the view occlusion angle sequence records the angular width of the corresponding occluded or unoccluded azimuth interval, and each element of the identity sequence records whether (and by which neighbor) that interval is occluded;
and taking the set of visible neighbors whose view occlusion angle is locally maximal in the current individual's field of view as the salient neighbor set.
4. The method of claim 1, wherein differentiating the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information comprises:
differentiating the first-order visual information of the salient neighbors that the current individual has in common at time $t$ and time $t-\Delta t$ to obtain the second-order visual information:

$$\dot{\theta}_{ij}(t)=\frac{\theta_{ij}(t)-\theta_{ij}(t-\Delta t)}{\Delta t},\qquad \dot{\varphi}_{ij}(t)=\frac{\varphi_{ij}(t)-\varphi_{ij}(t-\Delta t)}{\Delta t}$$

wherein $\dot{\theta}_{ij}(t)$ is the occlusion angle change of neighbor $j$ in the field of view of the current individual $i$, $\dot{\varphi}_{ij}(t)$ is the relative orientation change of neighbor $j$ in the field of view of the current individual $i$, $\theta_{ij}(t)$ is the occlusion angle of neighbor $j$ in the field of view of the current individual $i$, $\varphi_{ij}(t)$ is the relative orientation of neighbor $j$ in the field of view of the current individual $i$, $\Delta t$ is the differentiation time interval, and $t$ is time.
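The differencing applies only to neighbors observed at both instants. A sketch assuming first-order information is keyed by neighbor identity (the dictionary layout is an implementation assumption):

```python
def second_order_info(curr, prev, dt):
    """Finite-difference the first-order visual information of the salient
    neighbors common to times t and t - dt. `curr` and `prev` map a
    neighbor identity to a (theta, phi) pair."""
    out = {}
    for j in curr.keys() & prev.keys():  # neighbors seen at both instants
        theta_t, phi_t = curr[j]
        theta_p, phi_p = prev[j]
        out[j] = ((theta_t - theta_p) / dt,   # occlusion angle change
                  (phi_t - phi_p) / dt)       # relative orientation change
    return out

rates = second_order_info({1: (0.21, 0.52)}, {1: (0.20, 0.50)}, dt=0.1)
```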
5. The method of claim 1, wherein determining the self-driving term of the current individual according to the speed of the current individual and the expected speed comprises:
determining the self-driving term $f^{\mathrm{self}}_i$ of the current individual from the current speed $v_i(t)$ and the expected speed $v_0$.
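The claim fixes only the inputs of the self-driving term; its granted formula did not survive extraction and is not reproduced here. A linear relaxation toward the expected speed is a common choice in vision-based flocking models, shown purely as an assumption:

```python
def self_driving_term(speed, expected_speed, gamma=1.0):
    """Assumed relaxation form: accelerate when below the expected speed,
    decelerate when above. The gain gamma is illustrative."""
    return gamma * (expected_speed - speed)

print(self_driving_term(speed=0.8, expected_speed=1.0))  # positive: speed up
```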
6. The method of claim 1, wherein determining the calibration term of the current individual according to the relative orientation, the relative orientation change, and the view occlusion angle change of each salient neighbor of the current individual comprises:
determining the calibration term $f^{\mathrm{ali}}_i$ of the current individual from the relative orientation $\varphi_{ij}(t)$, the relative orientation change $\dot{\varphi}_{ij}(t)$, and the view occlusion angle change $\dot{\theta}_{ij}(t)$ of each salient neighbor of the current individual;
wherein $f^{\mathrm{ali}}_i$ is the calibration term, $N_i$ is the total number of salient neighbors that the current individual $i$ has in common at time $t$ and time $t-\Delta t$, and $c_1$, $c_2$, $c_3$ and $c_4$ are constants.
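Claim 6 names the inputs and four constants, but the granted expression did not survive extraction. The combination below, which weights the two motion cues by the neighbor's bearing in the individual's own frame, is an illustrative assumption only:

```python
import numpy as np

def calibration_term(neighbors, c1=1.0, c2=1.0, c3=1.0, c4=1.0):
    """Assumed calibration (alignment) response, averaged over the N_i
    salient neighbors common to two instants. Each entry of `neighbors`
    is (theta_dot, phi_dot, phi); the returned vector is expressed in the
    body frame, x along the current velocity."""
    n = len(neighbors)
    acc = np.zeros(2)
    for theta_dot, phi_dot, phi in neighbors:
        response = c1 * phi_dot + c2 * theta_dot  # motion cue strength
        acc += response * np.array([c3 * np.cos(phi), c4 * np.sin(phi)])
    return acc / n if n else acc

f_ali = calibration_term([(0.02, -0.10, 0.5), (-0.01, 0.04, -1.2)])
```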
7. The method of claim 1, wherein determining the repulsion term and the attraction term of the current individual according to the relative orientation and the view occlusion angle of each salient neighbor of the current individual comprises:
determining the repulsion term $f^{\mathrm{rep}}_i$ of the current individual from the relative orientation $\varphi_{ij}(t)$ and the view occlusion angle $\theta_{ij}(t)$ of each salient neighbor of the current individual, wherein $k_1$ and $k_2$ are constants and $N_i$ is the total number of salient neighbors of the current individual $i$ at the current time $t$;
and determining the attraction term $f^{\mathrm{att}}_i$ of the current individual from the relative orientation $\varphi_{ij}(t)$ and the view occlusion angle $\theta_{ij}(t)$ of each salient neighbor of the current individual, wherein $k_3$ and $k_4$ are constants and $N_i$ is the total number of salient neighbors of the current individual $i$ at the current time $t$.
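Intuitively, a neighbor subtending a large occlusion angle is close and should repel, while one subtending a small angle is distant and should attract. The thresholded linear forms below, with the two constants per term taken as a gain and an angle threshold, are illustrative assumptions, not the granted formulas:

```python
import numpy as np

def repulsion_attraction(neighbors, k1=1.0, k2=0.5, k3=1.0, k4=0.1):
    """Assumed repulsion and attraction responses, averaged over the N_i
    salient neighbors. Each entry of `neighbors` is (theta, phi); vectors
    are in the body frame, x along the current velocity."""
    n = len(neighbors)
    f_rep, f_att = np.zeros(2), np.zeros(2)
    for theta, phi in neighbors:
        toward = np.array([np.cos(phi), np.sin(phi)])  # toward the neighbor
        f_rep -= k1 * max(theta - k2, 0.0) * toward    # too close: push away
        f_att += k3 * max(k4 - theta, 0.0) * toward    # too far: pull in
    return (f_rep / n, f_att / n) if n else (f_rep, f_att)

f_rep, f_att = repulsion_attraction([(0.7, 0.3), (0.05, -2.0)])
```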
8. The method of claim 1, wherein the speed decision equation of the current individual comprises: a speed magnitude decision equation and a direction change decision equation of the current individual;
determining the speed decision equation of the current individual according to the self-driving term, the calibration term, the repulsion term, and the attraction term comprises:
projecting the self-driving term, the calibration term, the repulsion term, and the attraction term onto the directions parallel and perpendicular to the current speed direction of the current individual to obtain the speed magnitude decision equation and the direction change decision equation of the current individual;
wherein $\dot{v}_i(t)$ is the speed magnitude change of the current individual $i$ at the current time $t$, $\boldsymbol{v}_i(t)$ is the speed of the current individual $i$ at the current time $t$, $f^{\mathrm{self}}_i$ is the self-driving term, $f^{\mathrm{ali}}_i$ is the calibration term, $f^{\mathrm{rep}}_i$ is the repulsion term, $f^{\mathrm{att}}_i$ is the attraction term, $\theta_{ij}(t)$ is the occlusion angle of neighbor $j$ in the field of view of the current individual $i$, $\varphi_{ij}(t)$ is the relative orientation of neighbor $j$ in the field of view of the current individual $i$, $\dot{\theta}_{ij}(t)$ is the occlusion angle change of neighbor $j$ in the field of view of the current individual $i$, $\dot{\varphi}_{ij}(t)$ is the relative orientation change of neighbor $j$ in the field of view of the current individual $i$, and $\boldsymbol{e}_{\parallel}$ and $\boldsymbol{e}_{\perp}$ are the unit vectors parallel and perpendicular, respectively, to the speed $\boldsymbol{v}_i(t)$ of the current individual $i$ at the current time $t$.
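With the behavioral terms expressed in the body frame ($x$ along $\boldsymbol{e}_{\parallel}$, $y$ along $\boldsymbol{e}_{\perp}$), the projection reduces to reading off the two components. Dividing the perpendicular component by the speed to obtain a turn rate is a standard kinematic step assumed here, not quoted from the claim:

```python
import numpy as np

def decision_equations(f_self, f_ali, f_rep, f_att, speed):
    """Assumed projection of the summed terms: the parallel component
    drives the speed magnitude, the perpendicular component the heading."""
    total = np.asarray(f_ali) + np.asarray(f_rep) + np.asarray(f_att)
    dv = f_self + total[0]      # component along e_parallel
    dpsi = total[1] / speed     # component along e_perpendicular as turn rate
    return dv, dpsi

dv, dpsi = decision_equations(0.2, [0.0, 0.1], [-0.05, 0.0], [0.02, -0.03],
                              speed=1.0)
```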
9. An unmanned cluster behavior control device based on visual perception, the device comprising:
the visual image acquisition module is used for acquiring visual image information of other individuals within the 360° field of view of the current individual in the unmanned cluster, in the individual reference coordinate system; the individual reference coordinate system takes the speed direction of the current individual as the reference direction and the counterclockwise direction as the positive direction of relative orientation;
the first-order visual information determining module is used for distinguishing different individuals according to the occlusion relationships among individuals in the visual image information and determining the visual perception function of the current individual, and for obtaining first-order visual information of the visible neighbors according to the visual perception function; the first-order visual information includes: view occlusion angle and relative orientation;
the second-order visual information determining module is used for taking the set of visible neighbors whose view occlusion angle is a local maximum in the current individual's field of view as the salient neighbor set, according to the relative orientation order of the visible neighbors in the visual perception function, and for differentiating the first-order visual information of each salient neighbor of the current individual to obtain second-order visual information of each salient neighbor, the second-order visual information comprising the relative orientation change and the view occlusion angle change;
the decision module is used for determining the self-driving term of the current individual according to the speed and the expected speed of the current individual; determining the calibration term of the current individual according to the relative orientation, the relative orientation change, and the view occlusion angle change of each salient neighbor of the current individual; determining the repulsion term and the attraction term of the current individual according to the relative orientation and the view occlusion angle of each salient neighbor of the current individual; determining the speed decision equation of the current individual according to the self-driving term, the calibration term, the repulsion term, and the attraction term; and determining the speed information of the current individual at the current moment according to the speed decision equation of the current individual;
and the driving module is used for updating the motion state of the current individual according to the speed information at the current moment to obtain the position information of the current individual at the next moment.
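Claim 9 packages the method as five modules. A Python skeleton of that decomposition, with class and method names as descriptive assumptions and the sensing and decision bodies left abstract:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class UnmannedClusterController:
    """One controller per individual; each method mirrors one module."""
    position: np.ndarray = field(default_factory=lambda: np.zeros(2))
    heading: float = 0.0
    speed: float = 1.0

    def acquire_visual_image(self, scene):
        """Visual image acquisition module (sensor-specific)."""
        raise NotImplementedError

    def first_order_information(self, image):
        """First-order visual information module: perception function,
        occlusion angles, relative orientations."""
        raise NotImplementedError

    def second_order_information(self, neighbors, prev_neighbors, dt):
        """Second-order visual information module: salient-neighbor
        selection and finite differencing."""
        raise NotImplementedError

    def decide(self, neighbors, rates):
        """Decision module: combine the four behavioral terms."""
        raise NotImplementedError

    def drive(self, dv, dpsi, dt):
        """Driving module: integrate the decision outputs into the state."""
        self.speed += dv * dt
        self.heading += dpsi * dt
        self.position = self.position + self.speed * dt * np.array(
            [np.cos(self.heading), np.sin(self.heading)])
```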
10. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the method of any one of claims 1 to 8 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211569284.7A CN115576359B (en) | 2022-12-08 | 2022-12-08 | Unmanned cluster behavior control method and device based on visual perception and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115576359A CN115576359A (en) | 2023-01-06 |
CN115576359B (en) | 2023-03-07
Family
ID=84590805
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211569284.7A Active CN115576359B (en) | 2022-12-08 | 2022-12-08 | Unmanned cluster behavior control method and device based on visual perception and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115576359B (en) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110043537A1 (en) * | 2009-08-20 | 2011-02-24 | University Of Washington | Visual distortion in a virtual environment to alter or guide path movement |
CN111515950B (en) * | 2020-04-28 | 2022-04-08 | 腾讯科技(深圳)有限公司 | Method, device and equipment for determining transformation relation of robot coordinate system and storage medium |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110287829A (en) * | 2019-06-12 | 2019-09-27 | 河海大学 | A kind of video face identification method of combination depth Q study and attention model |
CN112001937A (en) * | 2020-09-07 | 2020-11-27 | 中国人民解放军国防科技大学 | Group chasing and escaping method and device based on field-of-view perception |
CN115202392A (en) * | 2022-07-11 | 2022-10-18 | 中国人民解放军国防科技大学 | Group adaptive behavior control method, device and equipment based on visual perception |
Non-Patent Citations (3)
Title |
---|
Etienne Abassi, Liuba Papeo. Behavioral and neural markers of visual configural processing in social scene perception. NeuroImage (full text). *
Jingtao Qi, Liang Bai, Yandong Xiao, Wansen Wu, Lu Liu. Group Chase and Escape of Biological Groups Based on a Visual Perception-Decision-Propulsion Model. IEEE Access, 2020. *
Jingtao Qi, Liang Bai, Yandong Xiao, Yingmei Wei, Wansen Wu. The emergence of collective obstacle avoidance based on a visual perception mechanism. Information Sciences (full text). *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||