CN112489138B - Target situation information intelligent acquisition system based on wearable equipment - Google Patents

Target situation information intelligent acquisition system based on wearable equipment Download PDF

Info

Publication number
CN112489138B
CN112489138B
Authority
CN
China
Prior art keywords
target
module
information
observer
relative distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011389561.7A
Other languages
Chinese (zh)
Other versions
CN112489138A (en
Inventor
倪勇
卢凯良
刘扬
刘学
陈彦璋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
716th Research Institute of CSIC
Original Assignee
716th Research Institute of CSIC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 716th Research Institute of CSIC filed Critical 716th Research Institute of CSIC
Priority to CN202011389561.7A
Publication of CN112489138A
Application granted
Publication of CN112489138B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a wearable-device-based intelligent target situation information acquisition system, comprising an intelligent target situation information acquisition module A1 and a wearable-device basic support module A2. The acquisition module A1 consists of an information acquisition and sensing module M1, an information calculation and processing module M2 and an information fusion display module M3. Using a target detection and recognition method P1 based on a lightweight architecture, an adaptive calculation and calibration method P2 for the target relative distance, and a calculation method P3 for the relative distance and azimuth of invisible targets, the system presents in real time, through see-through virtual-real fusion, the positioning frame, relative distance, category and other situation information of targets visible in the observer's field of view, and additionally annotates, in an overlaid manner, situation information such as the relative azimuth and distance of visible and invisible targets. The invention helps the observer acquire more accurate target situation information, reduces cognitive load, frees the hands, reduces action steps, and assists the observer in making rapid judgments and decisions in complex scenes.

Description

Target situation information intelligent acquisition system based on wearable equipment
Technical Field
The invention relates to wearable equipment and intelligent information processing systems, and in particular to an intelligent target situation information acquisition system based on wearable equipment.
Background
Acquiring, perceiving and presenting situation information such as the accurate distance, azimuth and multi-target distribution of targets relative to an observer is widely required in many scenarios, for example obtaining the distribution of personnel inside enclosed spaces, such as ship cabins and mines, that block both line of sight and wide-area communication signals. At present, target situation information is mainly obtained by means of handheld terminals or wearable equipment such as smart helmets fitted with VR/AR glasses, combined with intelligent information processing methods.
Handheld-terminal target positioning and display technology based on GPS/BeiDou signals can show intuitively, clearly and in real time the geographic position of a target and the relative distance and azimuth between observer and target. However, on the one hand, the handheld-terminal user must occupy one or both hands to obtain the target situation information, which inevitably interferes with manual actions and operations; on the other hand, existing handheld-terminal positioning and display technology does not integrate an AR module and therefore cannot provide virtual-real fusion.
Virtual-real fusion is currently realized mainly through smart helmets fitted with AR glasses; this scheme lets the observer obtain the situation information of an observed object through the wearable device using only natural head and neck movements, without occupying the hands. In the prior art, however, virtual-real fusion is generally implemented by "embedding" a virtual object into an image rather than fusing it, via see-through display, with the real world observed directly by the human eye. A technology for real-time, accurate perception and see-through virtual-real fusion presentation of target situation information (including azimuth, distance, category and the like), and in particular for presenting the situation information of targets invisible to the observer, has not yet been reported.
Disclosure of Invention
The invention aims to provide a wearable-device-based target situation information acquisition system with a virtual-real fusion function.
The technical solution for realizing the purpose of the invention is as follows:
the intelligent target situation information acquisition system based on the wearable equipment comprises an intelligent target situation information acquisition module A1 and a basic wearable equipment support module A2, wherein the intelligent target situation information acquisition module A1 comprises an information acquisition sensing module M1, an information calculation processing module M2 and an information fusion display module M3, and is used for presenting a positioning frame, relative distance and category situation information of a visible target in a visual field for an observer in real time and presenting situation information superposition type labels of the relative positions and the distances of the visible target and an invisible target; the wearable equipment foundation support module A2 comprises a helmet body M4, an embedded computing unit M5, a wireless communication terminal M6 and a power supply M7, and provides structure, calculation power, communication and energy support for the target situation information intelligent acquisition module A1.
Compared with the prior art, the invention has the following advantages:
(1) By means of the wearable head-mounted display alone, without relying on a handheld terminal, the user can acquire information on single or multiple targets in real time, freeing the hands and reducing action steps;
(2) The invention can acquire situation information of targets both visible and invisible within and beyond the observer's field of view, provides more accurate information to reduce cognitive load, and assists the user in making rapid judgments, decisions and actions in complex scenes.
The invention is described in further detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a diagram of the overall architecture of the system of the present invention.
Fig. 2 is a flowchart of a target detection and recognition method P1 based on a lightweight integrated architecture in the present invention.
FIG. 3 is a flow chart of an adaptive calculation calibration method P2 for target relative distance in the present invention.
Fig. 4 is a schematic diagram of a method for calculating the distance and azimuth angle of an invisible target relative to an observer according to an embodiment of the present invention.
FIG. 5 is a schematic diagram showing the presentation of visible and invisible object situation information in an embodiment of the present invention.
Fig. 6 is a diagram showing the overall information presentation effect of the object situation of the present invention.
Detailed Description
The wearable-device-based intelligent target situation information acquisition system comprises an intelligent target situation information acquisition module A1 and a wearable-device basic support module A2. The acquisition module A1 comprises an information acquisition and sensing module M1, an information calculation and processing module M2 and an information fusion display module M3; it presents to the observer, in real time, the positioning frame, relative distance and category situation information of targets visible in the field of view, and presents overlaid annotations of the relative azimuth and distance of visible and invisible targets. The wearable-device basic support module A2 comprises a helmet body M4, an embedded computing unit M5, a wireless communication terminal M6 and a power supply M7, and provides structural, computing, communication and energy support for the target situation information intelligent acquisition module A1.
The information acquisition and sensing module M1 in the target situation information intelligent acquisition module A1 comprises a visible light image sensing module M11, a geographic position signal acquisition module M12 and an azimuth signal acquisition module M13, and provides images, position coordinates and azimuth information for the information calculation and processing module M2. The visible light image sensing module M11 uses a binocular camera, the geographic position signal acquisition module M12 acquires the position information of the observer and of invisible target objects, and the azimuth signal acquisition module M13 uses a gyroscope; modules M11, M12 and M13 are mounted on the helmet body M4.
The visible light image sensing module M11 is provided with an eye movement attention tracking device, and the method for acquiring the image of the target area by the device comprises the following steps:
firstly, identifying the pupil center and the cornea reflection center on an eye image, extracting the pupil and the cornea in the shot image, taking the cornea reflection center as a base point of the relative positions of an eye tracking camera and an eyeball, and enabling the pupil center position coordinate to represent the gaze point position;
then, determining the line-of-sight direction from the relative position of the light spot and the pupil: the relative position is mapped to the gaze direction through a gaze mapping function model (x_g, y_g) = F(v), where (x_g, y_g) are the gaze point coordinates and v is the pupil-corneal-reflection spot vector;
and then, according to the sight line direction, obtaining the gaze point center of the observer by using the pupil cornea reflection technology, and performing region expansion by using the gaze point center to obtain the target region image.
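As an illustration of the eye-tracking flow above, the following is a minimal Python sketch of a PCCR-style gaze mapping and gaze-centered region expansion. The second-order polynomial form of the mapping F, the crop size, and all variable names are assumptions for illustration, not the calibrated model of the patent.

```python
import numpy as np

def gaze_from_pccr(pupil_center, glint_center, coeffs_x, coeffs_y):
    """Map the pupil-to-corneal-reflection vector to a gaze point on the scene image.

    coeffs_x / coeffs_y: 6-element polynomial coefficients obtained from a prior
    calibration step (assumed to be given here).
    """
    dx = pupil_center[0] - glint_center[0]
    dy = pupil_center[1] - glint_center[1]
    features = np.array([1.0, dx, dy, dx * dy, dx * dx, dy * dy])
    return float(features @ coeffs_x), float(features @ coeffs_y)

def expand_gaze_region(image, gaze_xy, half_size=128):
    """Crop a square region around the gaze point, clipped to the image bounds."""
    h, w = image.shape[:2]
    x, y = int(round(gaze_xy[0])), int(round(gaze_xy[1]))
    x0, x1 = max(0, x - half_size), min(w, x + half_size)
    y0, y1 = max(0, y - half_size), min(h, y + half_size)
    return image[y0:y1, x0:x1]
```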
The information calculation processing module M2 comprises an image preprocessing module M21, a target detection and identification module M22, a relative distance and absolute azimuth calculation module M23 and a relative azimuth calculation module M24;
the image preprocessing module M21 receives and processes the original image of the visible target in the visual field of the observer, which is output by the information acquisition and perception module M1, and inputs the optimized image after noise reduction and definition to the target detection and recognition module M22; the target detection and recognition module M22 utilizes a target detection and recognition method P1 based on a lightweight integrated architecture to calculate and output a target positioning frame and category information on the embedded computing unit M5 in real time;
the relative distance and absolute azimuth calculation module M23 and the relative azimuth calculation module M24 are configured to receive and process the position information of the observer, the position information of the invisible target object, and the helmet azimuth information of the observer, which are output by the information acquisition and sensing module M1.
The relative distance and absolute azimuth calculation module M23 calculates the relative distance and absolute azimuth between the observer and the invisible target object from the observer's own position information and the position information of the invisible target object acquired from other observers through the wireless communication terminal M6;
the relative azimuth calculation module M24 then combines this relative distance and absolute azimuth with the observer's helmet azimuth, acquired by the gyroscope M13, to obtain the distance and relative azimuth of the invisible target with respect to the observer;
that is, the invisible-target relative distance and azimuth calculation method P3 takes the relative distance and absolute azimuth of the invisible target object obtained by module M23, combines them with the azimuth information from the gyroscope M13, and obtains in real time the distance and azimuth of the invisible target relative to the observer through a fast coordinate calculation and conversion algorithm.
The information fusion display module M3 comprises a visible target situation see-through virtual-real fusion display module M31, an invisible target situation overlaid annotation display module M32 and an AR glasses module M33;
the visible target situation see-through virtual-real fusion display module M31 receives the positioning frame, category and relative distance value of the visible target output by the information calculation and processing module M2, outputs them to the AR glasses module M33 for see-through virtual-real fusion presentation, and shares them with the invisible target situation overlaid annotation display module M32;
the invisible target situation overlaid annotation display module M32 receives the relative distance and azimuth of the invisible target with respect to the observer output by the information calculation and processing module M2, together with the visible target situation information output by module M31, and outputs them to the AR glasses module M33, where the invisible target situation information is displayed as overlaid annotations so that an overall situation map of visible and invisible targets is synthesized; the AR glasses module M33 adopts an optical waveguide scheme, which enhances both the display of the overall situation map of visible/invisible targets and the observer's perception of the environment.
The visible target situation see-through virtual-real fusion display module M31 comprises a target positioning frame and category display module M311 and a target-to-observer distance display module M312; module M311 processes the data with the adaptive calculation and calibration method P2 for the target relative distance and outputs the corrected data to module M312, thereby obtaining the measured target relative distance value.
The target detection and recognition method P1 based on a lightweight integrated architecture uses a neural network model that integrates a feature extraction backbone network with a single-stage fast target detection framework; the model is pre-trained on a public dataset, transferred to a specific target dataset and fine-tuned on it, and can then recognize and locate target objects accurately in real time. The method comprises the following steps (a code sketch of the key building blocks follows the step list):
step P11: after obtaining normalized samples optimized by image preprocessing, construct a lightweight feature extraction deep learning network model; the model contains a lightweight depth-separable backbone network MobileNet built with the depthwise separable convolution method, which consists of depthwise convolution and pointwise convolution: depthwise convolution convolves each input channel separately and pointwise convolution combines the convolved outputs; applying the MobileNet neural network to the input image yields feature maps at different scales;
step P12: densely sample the feature maps with an SSD single-stage target detection neural network framework, converting them into a set of candidate detection boxes and category confidences;
step P13: perform non-maximum suppression on the objects in the predicted boxes;
step P14: output the positioning frame and category information of the target.
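As a minimal PyTorch sketch, and not the patented model itself, the block below illustrates the two building blocks named in steps P11 and P13: a depthwise separable convolution layer and score filtering followed by non-maximum suppression. Layer sizes and thresholds are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import nms

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution per input channel followed by a 1x1 pointwise convolution."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, stride, 1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(c_in), nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))    # per-channel spatial filtering
        return self.act(self.bn2(self.pointwise(x))) # channel mixing

def filter_detections(boxes, scores, score_thresh=0.5, iou_thresh=0.45):
    """Keep confident candidate boxes and suppress overlapping ones (step P13)."""
    keep = scores > score_thresh
    boxes, scores = boxes[keep], scores[keep]
    kept = nms(boxes, scores, iou_thresh)  # torchvision non-maximum suppression
    return boxes[kept], scores[kept]
```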
The target detection and recognition method P1 based on the lightweight integrated architecture further fuses context information into the lightweight feature extraction deep learning network model, trains the model using a focal loss function combined with transfer learning, and is combined with a target tracking model to obtain a small-target detection and tracking model, through which new image information is obtained from small-target detail images;
the fusion context information specifically comprises: adding an FPN context information feature fusion algorithm to a lightweight neural network target detection model, wherein the algorithm fuses feature images with different scales by adopting a top-down path, and semantically fuses the feature pyramid space of the upper layer to the feature pyramid bottom space of the lower layer;
the focal loss function formula is as follows:
wherein the focus parameter,/>Representing the real category->Is of the category ofyProbability values predicted by class time model, +.>Is of the category ofyA class-time weighting factor;
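A minimal sketch of the focal loss above for the binary case, written in PyTorch; the default α and γ values are common choices and are assumptions here, not values from the patent.

```python
import torch

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss FL = -alpha_y * (1 - p_y)**gamma * log(p_y) for binary labels.

    p : predicted probability of the positive class, shape (N,)
    y : ground-truth labels in {0, 1}, shape (N,)
    """
    p_y = torch.where(y == 1, p, 1.0 - p)  # probability the model assigns to the true class
    alpha_y = torch.where(y == 1, torch.full_like(p, alpha), torch.full_like(p, 1.0 - alpha))
    return (-alpha_y * (1.0 - p_y) ** gamma * torch.log(p_y.clamp(min=1e-8))).mean()
```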
the target tracking model is constructed based on a Kalman filtering algorithm: firstly, establishing a state equation based on a Kalman filtering cooperative algorithm, then inputting a small target detail image as the state equation, and adjusting parameters of the state equation according to an output result of the state equation.
The adaptive calculation and calibration method P2 for the target relative distance specifically comprises the following steps (a sketch of this flow follows the step list):
step P21: obtain the target relative distance value as the weighted average of the pixels inside the target positioning frame in the depth map;
step P22: judge whether the target relative distance value is within the valid range; if so, output the current value and execute step P23, otherwise execute step P25;
step P23: select target objects of the same class to form a calibration reference set;
step P24: statistically fit the functional relation between the height of the target positioning frame and the distance value, then execute step P27;
step P25: judge whether the height of the target positioning frame is smaller than 32 pixels; if so, execute step P27, otherwise execute step P26;
step P26: output no relative distance information for this target and end the P2 algorithm;
step P27: calibrate the distance value of the target outside the valid range using the functional relation obtained in step P24;
step P28: output the target relative distance value.
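A minimal Python sketch of the P2 flow above, under stated assumptions: a plain average of valid depth pixels stands in for the weighted average of step P21, the calibration reference set is assumed to be built elsewhere (steps P23-P24), and the height-to-distance relation is assumed to be inverse-proportional; the 50 m valid range matches the embodiment below.

```python
import numpy as np

def p2_calibrated_distance(depth_map, box, reference_set, max_valid_m=50.0, min_box_h_px=32):
    """P2 flow: in-box depth average, validity check, height-based calibration.

    box           : (x0, y0, x1, y1) pixel coordinates of the target positioning frame
    reference_set : list of (box_height_px, distance_m) pairs from same-class targets
                    whose measured distance was within the valid range (steps P23-P24)
    """
    x0, y0, x1, y1 = box
    patch = depth_map[y0:y1, x0:x1]
    valid = np.isfinite(patch) & (patch > 0)
    distance = float(patch[valid].mean()) if valid.any() else float("inf")   # step P21

    if 0.0 < distance <= max_valid_m:            # step P22: measurement is usable
        return distance

    box_height = y1 - y0
    if box_height >= min_box_h_px or len(reference_set) < 2:   # steps P25/P26: give up
        return None

    # Steps P24/P27: fit distance ~ a / height + b on the reference set and extrapolate.
    h = np.array([r[0] for r in reference_set], dtype=float)
    d = np.array([r[1] for r in reference_set], dtype=float)
    a, b = np.polyfit(1.0 / h, d, 1)
    return float(a / box_height + b)             # step P28: calibrated distance value
```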
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Examples
The invention relates to a wearable-device-based intelligent target situation information acquisition system which, as shown in fig. 1, comprises an intelligent target situation information acquisition module A1 and a wearable-device basic support module A2. The acquisition module A1 comprises an information acquisition and sensing module M1, an information calculation and processing module M2 and an information fusion display module M3; it presents to the observer, in real time, the positioning frame, relative distance and category situation information of targets visible in the field of view, and presents overlaid annotations of the relative azimuth and distance of visible and invisible targets. The wearable-device basic support module A2 comprises a helmet body M4, an embedded computing unit M5, a wireless communication terminal M6 and a power supply M7, and provides structural, computing, communication and energy support for the target situation information intelligent acquisition module A1.
With reference to figs. 4 and 5, consider a scene with two observers A and B and three targets I, J and K, in which targets I and J are in the field of view of observer B while targets J and K are visible to observer A. From the viewpoint of observer A, the situation information of the visible targets J and K and of the invisible target I is acquired and presented by the technical scheme of the invention as follows:
(1) Observer A obtains situation information of the visible targets J and K
First, the binocular camera M11 on the smart helmet M4 worn by observer A acquires the original images of targets J and K and obtains a pixel-level depth map with a binocular vision algorithm. The original images are converted by the image preprocessing module M21 into normalized image samples through noise reduction, sharpening and other optimization, and are input into the target detection and recognition module M22. The steps of the target detection and recognition method P1 based on the lightweight network architecture are shown in fig. 2:
step P11: after obtaining normalized samples optimized by image preprocessing, construct the lightweight feature extraction deep learning network model containing the lightweight depth-separable backbone network MobileNet; apply the MobileNet neural network to the input images to obtain feature maps at different scales;
step P12: densely sample the feature maps with the SSD single-stage target detection neural network framework, converting them into a set of candidate detection boxes and category confidences;
step P13: perform non-maximum suppression on the objects in the predicted boxes;
step P14: output the positioning frame and category information of the targets (J, K).
Then, with reference to fig. 3, a more accurate value of the distance of targets J and K relative to observer A is obtained with the adaptive calculation and calibration method P2 for the target relative distance, which comprises the following steps:
step P21: first obtain the distance of each of targets J and K as the weighted average of the pixels within its positioning frame in the depth map;
step P22: judge whether each target relative distance value is within the valid range; in this example the measured distance of target J is within the valid range of M11 (taken here as no more than 50 m), while target K is outside it; the relative distance value of target J is therefore output and step P23 is executed for it, while step P25 is executed for target K;
step P23: select target objects of the same class as target J to form a calibration reference set;
step P24: apply existing statistical methods to the available data to fit the functional relation between the height of the target positioning frame and the distance value;
step P25: judge whether the height of target K's positioning frame is smaller than 32 pixels; if so, execute step P27, otherwise execute step P26; in this embodiment the height of target K's positioning frame is smaller than 32 pixels;
step P26: output no relative distance information and end the P2 algorithm;
step P27: calibrate the distance value of target K using the functional relation obtained in step P24;
step P28: output the relative distance value of target K.
(2) The observer A obtains situation information of the invisible target I
As shown in fig. 4, the coordinates (X_A, Y_A) of observer A in the world coordinate system X-Y are obtained by the geographic position signal acquisition module M12, and the coordinates (X_I, Y_I) of target I, which is invisible to A, in the world coordinate system X-Y are obtained via communication with observer B; the distance S_AI and the absolute azimuth α_AI of target I relative to observer A are then calculated from the two-point distance formula. The angle β_AI between the observer's helmet azimuth coordinate system and the world coordinate system X-Y is obtained from the gyroscope M13 mounted on the helmet M4, giving the relative azimuth γ_AI = α_AI - β_AI of target I with respect to observer A. The obtained relative distance S_AI and relative azimuth γ_AI of the invisible target I are output to the information fusion display module M3.
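A minimal Python sketch of this geometry, under the assumption that azimuths are measured from the world X axis with atan2 and that the gyroscope heading β_AI is given in degrees; the function and variable names are illustrative.

```python
import math

def invisible_target_bearing(observer_xy, target_xy, helmet_heading_deg):
    """Distance and relative azimuth of an out-of-view target with respect to the observer.

    observer_xy, target_xy : (X, Y) positions in the shared world coordinate system
    helmet_heading_deg     : angle beta between the helmet frame and the world frame (gyroscope)
    Returns (S_AI, gamma_AI) with gamma_AI = alpha_AI - beta_AI, in degrees.
    """
    dx = target_xy[0] - observer_xy[0]
    dy = target_xy[1] - observer_xy[1]
    distance = math.hypot(dx, dy)                        # two-point distance formula
    absolute_azimuth = math.degrees(math.atan2(dy, dx))  # alpha_AI in the world frame
    relative_azimuth = (absolute_azimuth - helmet_heading_deg + 180.0) % 360.0 - 180.0
    return distance, relative_azimuth

# Example: observer A at (0, 0) with heading 30 degrees, target I reported at (30, 40).
# print(invisible_target_bearing((0.0, 0.0), (30.0, 40.0), 30.0))  # -> (50.0, ~23.13)
```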
(3) Presenting observer A with situation information of the visible targets J and K and the invisible target I
As shown in figs. 5 and 6, the AR glasses M33 present the targets J and K visible to observer A with a see-through virtual-real fusion effect, annotating the relative distance, category and other information near the positioning frame of each real target; the invisible target I is presented as an overlaid auxiliary annotation; at the same time, the situation information of visible and invisible targets can be displayed together on the overall situation map.
This embodiment is illustrated with two observers; with three or more observers, more communication interactions are required to transmit situation information, but neither the functional architecture of the intelligent target situation information acquisition module nor its effectiveness is affected.
While the technical content and features of the present invention have been disclosed above, those skilled in the art may make various substitutions and modifications based on the teachings and disclosure of the present invention without departing from its spirit. Accordingly, the scope of the present invention should not be limited to the embodiments disclosed, but should include the various alternatives and modifications that do not depart from the invention and are covered by the claims of the present application.

Claims (7)

1. A wearable-device-based intelligent target situation information acquisition system, characterized by comprising an intelligent target situation information acquisition module A1 and a wearable-device basic support module A2, wherein the acquisition module A1 comprises an information acquisition and sensing module M1, an information calculation and processing module M2 and an information fusion display module M3, presents to the observer, in real time, the positioning frame, relative distance and category situation information of targets visible in the field of view, and presents overlaid annotations of the relative azimuth and distance of visible and invisible targets; the wearable-device basic support module A2 comprises a helmet body M4, an embedded computing unit M5, a wireless communication terminal M6 and a power supply M7, and provides structural, computing, communication and energy support for the target situation information intelligent acquisition module A1;
the information fusion display module M3 comprises a visible target situation see-through virtual-real fusion display module M31, an invisible target situation overlaid annotation display module M32 and an AR glasses module M33;
the visible target situation see-through virtual-real fusion display module M31 receives the positioning frame, category and relative distance value of the visible target output by the information calculation and processing module M2, outputs them to the AR glasses module M33 for see-through virtual-real fusion presentation, and shares them with the invisible target situation overlaid annotation display module M32;
the invisible target situation overlaid annotation display module M32 receives the relative distance and azimuth of the invisible target with respect to the observer output by the information calculation and processing module M2, together with the visible target situation information output by module M31, and outputs them to the AR glasses module M33, where the invisible target situation information is displayed as overlaid annotations so that an overall situation map of visible and invisible targets is synthesized; the AR glasses module M33 adopts an optical waveguide scheme, which enhances both the display of the overall situation map of visible/invisible targets and the observer's perception of the environment;
the visible target situation see-through virtual-real fusion display module M31 comprises a target positioning frame and category display module M311 and a target-to-observer distance display module M312; module M311 processes the data with the adaptive calculation and calibration method P2 for the target relative distance and outputs the corrected data to module M312, thereby obtaining the calculated target relative distance value;
the adaptive calculation and calibration method P2 for the target relative distance specifically comprises the following steps:
step P21: obtain the target relative distance value as the weighted average of the pixels inside the target positioning frame in the depth map;
step P22: judge whether the target relative distance value is within the valid range; if so, output the current value and execute step P23, otherwise execute step P25;
step P23: select target objects of the same class to form a calibration reference set;
step P24: statistically fit the functional relation between the height of the target positioning frame and the distance value, then execute step P27;
step P25: judge whether the height of the target positioning frame is smaller than 32 pixels; if so, execute step P27, otherwise execute step P26;
step P26: output no relative distance information for this target and end the P2 algorithm;
step P27: calibrate the distance value of the target outside the valid range using the functional relation obtained in step P24;
step P28: output the target relative distance value.
2. The intelligent acquisition system for target situation information based on wearable equipment according to claim 1, wherein the acquisition sensing module M1 in the intelligent acquisition module for target situation information A1 comprises a visible light image sensing module M11, a geographic position signal acquisition module M12 and an azimuth signal acquisition module M13, and provides images, position coordinates and azimuth information for the information processing module M2; the visible light image sensing module M11 adopts a binocular camera, the geographic position signal acquisition module M12 acquires the position information of an observer and the position information of an invisible target object, the azimuth signal acquisition module M13 adopts a gyroscope, and the visible light image sensing module M11, the geographic position signal acquisition module M12 and the azimuth signal acquisition module M13 are arranged on the helmet body M4.
3. The intelligent acquisition system for target situation information based on wearable equipment according to claim 2, wherein the visible light image sensing module M11 is provided with an eye movement attention tracking device, and the method for acquiring the target area image by the device is as follows:
firstly, identifying the pupil center and the cornea reflection center on an eye image, extracting the pupil and the cornea in the shot image, taking the cornea reflection center as a base point of the relative positions of an eye tracking camera and an eyeball, and enabling the pupil center position coordinate to represent the gaze point position;
then, determining the line-of-sight direction from the relative position of the light spot and the pupil: the relative position is mapped to the gaze direction through a gaze mapping function model (x_g, y_g) = F(v), where (x_g, y_g) are the gaze point coordinates and v is the pupil-corneal-reflection spot vector;
and then, according to the sight line direction, obtaining the gaze point center of the observer by using the pupil cornea reflection technology, and performing region expansion by using the gaze point center to obtain the target region image.
4. The intelligent acquisition system of target situation information based on wearable equipment according to claim 1, wherein the information calculation processing module M2 comprises an image preprocessing module M21, a target detection and identification module M22, a relative distance and absolute azimuth calculation module M23 and a relative azimuth calculation module M24;
the image preprocessing module M21 receives and processes the original images of visible targets in the observer's field of view output by the information acquisition and sensing module M1, and feeds the optimized images, after noise reduction and sharpening, to the target detection and recognition module M22; the target detection and recognition module M22 uses the target detection and recognition method P1 based on a lightweight integrated architecture to compute and output the target positioning frame and category information in real time on the embedded computing unit M5;
the relative distance and absolute azimuth calculation module M23 and the relative azimuth calculation module M24 are configured to receive and process the position information of the observer, the position information of the invisible target object, and the helmet azimuth information of the observer, which are output by the information acquisition and sensing module M1.
5. The intelligent acquisition system for target situation information based on wearable equipment according to claim 4, wherein the relative distance and absolute azimuth calculation module M23 calculates a relative distance and an absolute azimuth to an invisible target object from position information of observers and position information of invisible target objects acquired from other observers through the wireless communication terminal M6;
the relative azimuth calculation module M24 then combines this relative distance and absolute azimuth with the observer's helmet azimuth, acquired by the gyroscope M13, to obtain the distance and relative azimuth of the invisible target with respect to the observer;
that is, the invisible-target relative distance and azimuth calculation method P3 takes the relative distance and absolute azimuth of the invisible target object obtained by module M23, combines them with the azimuth information from the gyroscope M13, and obtains in real time the distance and azimuth of the invisible target relative to the observer through a fast coordinate calculation and conversion algorithm.
6. The wearable-device-based intelligent target situation information acquisition system according to claim 4, wherein the target detection and recognition method P1 based on a lightweight integrated architecture comprises the following steps:
step P11: after obtaining normalized samples optimized by image preprocessing, construct a lightweight feature extraction deep learning network model; the model contains a lightweight depth-separable backbone network MobileNet built with the depthwise separable convolution method, which consists of depthwise convolution and pointwise convolution: depthwise convolution convolves each input channel separately and pointwise convolution combines the convolved outputs; applying the MobileNet neural network to the input image yields feature maps at different scales;
step P12: densely sample the feature maps with an SSD single-stage target detection neural network framework, converting them into a set of candidate detection boxes and category confidences;
step P13: perform non-maximum suppression on the objects in the predicted boxes;
step P14: output the positioning frame and category information of the target.
7. The wearable-device-based intelligent target situation information acquisition system according to claim 4, wherein the target detection and recognition method P1 based on a lightweight integrated architecture fuses context information into the lightweight feature extraction deep learning network model, trains the model using a focal loss function combined with transfer learning, and is combined with a target tracking model to obtain a small-target detection and tracking model, through which new image information is obtained from small-target detail images;
fusing context information specifically means adding an FPN context information feature fusion algorithm to the lightweight neural network target detection model; the algorithm fuses feature maps of different scales along a top-down path, merging the semantics of the upper levels of the feature pyramid into its lower, bottom levels;
the focal loss function is
FL(p_y) = -α_y · (1 - p_y)^γ · log(p_y)
where γ is the focusing parameter, y denotes the true category, p_y is the probability predicted by the model for category y, and α_y is the weighting factor for category y;
the target tracking model is constructed on the basis of a Kalman filtering algorithm: a state equation is first established based on the Kalman filtering algorithm, the small-target detail image is then used as its input, and the parameters of the state equation are adjusted according to its output.
CN202011389561.7A 2020-12-02 2020-12-02 Target situation information intelligent acquisition system based on wearable equipment Active CN112489138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011389561.7A CN112489138B (en) 2020-12-02 2020-12-02 Target situation information intelligent acquisition system based on wearable equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011389561.7A CN112489138B (en) 2020-12-02 2020-12-02 Target situation information intelligent acquisition system based on wearable equipment

Publications (2)

Publication Number Publication Date
CN112489138A CN112489138A (en) 2021-03-12
CN112489138B true CN112489138B (en) 2024-02-20

Family

ID=74938629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011389561.7A Active CN112489138B (en) 2020-12-02 2020-12-02 Target situation information intelligent acquisition system based on wearable equipment

Country Status (1)

Country Link
CN (1) CN112489138B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11907521B2 (en) 2021-01-28 2024-02-20 Samsung Electronics Co., Ltd. Augmented reality calling interface
CN113506027A (en) * 2021-07-27 2021-10-15 北京工商大学 Course quality assessment and improvement method based on student visual attention and teacher behavior

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105812778A (en) * 2015-01-21 2016-07-27 成都理想境界科技有限公司 Binocular AR head-mounted display device and information display method therefor
WO2017173735A1 (en) * 2016-04-07 2017-10-12 深圳市易瞳科技有限公司 Video see-through-based smart eyeglasses system and see-through method thereof
CN109766769A (en) * 2018-12-18 2019-05-17 四川大学 A kind of road target detection recognition method based on monocular vision and deep learning
CN111294586A (en) * 2020-02-10 2020-06-16 Oppo广东移动通信有限公司 Image display method and device, head-mounted display equipment and computer readable medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040104935A1 (en) * 2001-01-26 2004-06-03 Todd Williamson Virtual reality immersion system
EP1946243A2 (en) * 2005-10-04 2008-07-23 Intersense, Inc. Tracking objects with markers
WO2019143844A1 (en) * 2018-01-17 2019-07-25 Magic Leap, Inc. Eye center of rotation determination, depth plane selection, and render camera positioning in display systems

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105812778A (en) * 2015-01-21 2016-07-27 成都理想境界科技有限公司 Binocular AR head-mounted display device and information display method therefor
WO2017173735A1 (en) * 2016-04-07 2017-10-12 深圳市易瞳科技有限公司 Video see-through-based smart eyeglasses system and see-through method thereof
CN109766769A (en) * 2018-12-18 2019-05-17 四川大学 A kind of road target detection recognition method based on monocular vision and deep learning
CN111294586A (en) * 2020-02-10 2020-06-16 Oppo广东移动通信有限公司 Image display method and device, head-mounted display equipment and computer readable medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Driving Instruction and Training of Welfare Vehicle controlled by Virtual Platoon Scheme using Sharing System of AR; Yudai Takeuchi et al.; 2019 19th International Conference on Control, Automation and Systems (ICCAS); full text *

Also Published As

Publication number Publication date
CN112489138A (en) 2021-03-12

Similar Documents

Publication Publication Date Title
US10229511B2 (en) Method for determining the pose of a camera and for recognizing an object of a real environment
CN112489138B (en) Target situation information intelligent acquisition system based on wearable equipment
US10510137B1 (en) Head mounted display (HMD) apparatus with a synthetic targeting system and method of use
CN107105333A (en) A kind of VR net casts exchange method and device based on Eye Tracking Technique
US11234096B2 (en) Individualization of head related transfer functions for presentation of audio content
CN106327584B (en) Image processing method and device for virtual reality equipment
CN108478184A (en) Eyesight measurement method and device, VR equipment based on VR
CN109545003B (en) Display method, display device, terminal equipment and storage medium
CN108681699A (en) A kind of gaze estimation method and line-of-sight estimation device based on deep learning
CN108259887A (en) Watch point calibration method and device, blinkpunkt scaling method and device attentively
US20200341284A1 (en) Information processing apparatus, information processing method, and recording medium
CN112507840A (en) Man-machine hybrid enhanced small target detection and tracking method and system
CN107422844A (en) A kind of information processing method and electronic equipment
CN108064447A (en) Method for displaying image, intelligent glasses and storage medium
CN113467619A (en) Picture display method, picture display device, storage medium and electronic equipment
US11785411B2 (en) Information processing apparatus, information processing method, and information processing system
CN112400148A (en) Method and system for performing eye tracking using off-axis cameras
CN109917908B (en) Image acquisition method and system of AR glasses
CN116883436A (en) Auxiliary understanding method and system based on sight estimation
CN112651270A (en) Gaze information determination method and apparatus, terminal device and display object
WO2022176450A1 (en) Information processing device, information processing method, and program
CN113743172B (en) Personnel gazing position detection method and device
CN111654688B (en) Method and equipment for acquiring target control parameters
CN111222448B (en) Image conversion method and related product
CN215821381U (en) Visual field auxiliary device of AR & VR head-mounted typoscope in coordination

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 222061 No.18, Shenghu Road, Haizhou District, Lianyungang City, Jiangsu Province

Applicant after: The 716th Research Institute of China Shipbuilding Corp.

Address before: 222061 No.18, Shenghu Road, Haizhou District, Lianyungang City, Jiangsu Province

Applicant before: 716TH RESEARCH INSTITUTE OF CHINA SHIPBUILDING INDUSTRY Corp.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant