CN113409218B - Wearable protective tool and scene presenting method for wearable protective tool - Google Patents

Wearable protective tool and scene presenting method for wearable protective tool

Info

Publication number
CN113409218B
Authority
CN
China
Prior art keywords
component
augmented reality
target
scene
infrared image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110712665.5A
Other languages
Chinese (zh)
Other versions
CN113409218A (en)
Inventor
黄照森
孙旷野
赵川
方俊慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Haikang Fire Technology Co ltd
Original Assignee
Hangzhou Haikang Fire Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Haikang Fire Technology Co ltd
Priority to CN202110712665.5A
Publication of CN113409218A
Application granted
Publication of CN113409218B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20192 Edge enhancement; Edge preservation
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the application disclose a wearable protective tool and a scene presenting method for the wearable protective tool. According to the embodiments, the wearable protective tool can include an augmented reality component through which a target object observes a real scene, and an infrared detection component for outputting an infrared image of the real scene. The infrared image, which the infrared detection component outputs by sensing the temperature of scene targets in the real scene, can be converted by a processing component into a target infrared image matched with the optical field angle of the augmented reality component. The augmented reality component can therefore enhance the correct reproduction of the real scene within the field of view of the target object by presenting the converted target infrared image, which enhances the visual recognizability of the real scene and helps to alleviate or even eliminate the difficulty of clearly perceiving the real scene.

Description

Wearable protective tool and scene presenting method for wearable protective tool
Technical Field
The present disclosure relates to protective equipment technology for disaster relief, and more particularly, to a wearable protective tool and a scene presenting method for the wearable protective tool.
Background
In some disaster scenarios, such as a fire scene, large amounts of smoke often spread and may be accompanied by dazzling flames. These environmental factors cause visual interference that makes it difficult to clearly perceive the real environment and may result in rescue failure.
Therefore, how to address the difficulty of clearly perceiving a real scene (for example, a disaster scene) has become a technical problem to be solved.
Disclosure of Invention
In an embodiment of the present application, in order to solve, or at least partially solve, the above technical problem, a wearable protective tool and a scene presenting method for the wearable protective tool are provided so as to enhance the visual recognizability of a real scene.
The wearable protective tool provided in one embodiment can include: an augmented reality component through which a target object observes a real scene; an infrared detection component for outputting an infrared image of the real scene, wherein the field of view corresponding to the imaging field angle of the infrared detection component and the field of view corresponding to the optical field angle of the augmented reality component overlap in the same direction, the optical field angle being the field angle at which the target object observes the real scene through the augmented reality component; and a processing component for processing the infrared image based on the conversion between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component, so as to obtain a target infrared image matched with the optical field angle of the augmented reality component, and for sending the target infrared image to the augmented reality component. The augmented reality component is further configured to present the target infrared image such that a scene target present in the target infrared image corresponds to the pose of that scene target in the real scene.
Optionally, the processing component is specifically configured to: process the infrared image based on a conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain the target infrared image matched with the optical field angle of the augmented reality component, wherein the conversion relationship is at least used to define an infrared image cropping size corresponding to the difference between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component.
Optionally, the processing component is specifically configured to: determine a calibration reference object in the real scene; calibrate a predetermined calibration conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component based on the difference between the distance of the calibration reference object from the wearable protective tool and a preset calibration distance of the wearable protective tool, so as to obtain a calibrated conversion relationship; and process the infrared image based on the calibrated conversion relationship to obtain the target infrared image matched with the optical field angle of the augmented reality component.
Optionally, the processing component is further configured to: perform enhancement processing on the outline of a scene target appearing in the target infrared image, so that the outline of the scene target is displayed in an enhanced manner in the target infrared image presented by the augmented reality component.
Optionally, the wearable protective tool further includes a brightness detection component for detecting the ambient brightness in the real scene; the processing component is further configured to generate a backlight brightness adjusting instruction according to the ambient brightness and send the backlight brightness adjusting instruction to the augmented reality component; and the augmented reality component is further configured to adjust its backlight brightness according to the backlight brightness adjusting instruction, so as to present the target infrared image based on the adjusted backlight brightness.
Optionally, the wearable protective tool further includes a position sensing component and a wireless communication component, wherein the position sensing component is used for sensing spatial position information of the wearable protective tool, and the wireless communication component is used for remotely transmitting the spatial position information so that a remote server can maintain the movement track of the wearable protective tool using the spatial position information; the remote server is further used for presenting, in a rear wearable protective tool, a visual navigation instruction generated based on the movement track, wherein the starting movement time of the rear wearable protective tool is later than that of the wearable protective tool. For example, the position sensing component includes a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetic sensor.
Optionally, the wearable protective tool further includes a distance detection component for detecting the distance of a scene target in the real scene relative to the wearable protective tool; the processing component is further configured to associate the distance detected by the distance detection component with the scene target and send the association processing result to the augmented reality component; and the augmented reality component is further configured to present the distance in association with the scene target based on the association processing result.
Optionally, the processing component is further configured to generate a switching instruction for the image presentation type of the target infrared image and send the switching instruction to the augmented reality component; and the augmented reality component is further configured to present the target infrared image according to the image presentation type corresponding to the switching instruction, wherein the image presentation type includes a thermal image type or a grayscale image type.
Optionally, the processing component is further configured to generate the switching instruction according to temperature information of the real scene detected by the infrared detection component.
Optionally, the wearable protective tool further includes a mode switch configured to respond to an external touch operation by generating a switching request for the image presentation type of the target infrared image and sending the switching request to the processing component; the processing component is further configured to generate the switching instruction according to the switching request.
Optionally, the wearable protective tool further includes a gas detection component for detecting a gas composition parameter of the environment in which the wearable protective tool is located; the processing component is further configured to generate a breathing switching instruction according to the gas composition parameter and send the breathing switching instruction to a breathing valve; and the breathing valve is configured to determine a communication state according to the breathing switching instruction, wherein the communication state includes a state of communicating with the atmosphere of the environment in which the wearable protective tool is located or a state of communicating with a gas storage bottle.
Optionally, the wearable protective tool further includes a gas detection component for detecting a gas composition parameter of the environment in which the wearable protective tool is located; the processing component is further configured to generate breathing warning information according to the gas composition parameter and send the breathing warning information to the augmented reality component; and the augmented reality component is further configured to present the breathing warning information, wherein the breathing warning information includes information content indicating that dangerous gas exists in the environment in which the wearable protective tool is located.
Optionally, the processing component is further configured to generate gas remaining amount prompt information according to the remaining gas amount in the gas storage bottle and send the gas remaining amount prompt information to the augmented reality component; the augmented reality component is further configured to present the gas remaining amount prompt information.
The scene presentation method for a wearable protective tool provided in another embodiment can include: acquiring an infrared image of a real scene output by an infrared detection component, wherein the field of view corresponding to the imaging field angle of the infrared detection component and the field of view corresponding to the optical field angle of an augmented reality component overlap in the same direction, the augmented reality component is used by a target object to observe the real scene, and the optical field angle is the field angle at which the target object observes the real scene through the augmented reality component; processing the infrared image based on the conversion between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component, so as to obtain a target infrared image matched with the optical field angle of the augmented reality component; and sending the target infrared image to the augmented reality component so that the augmented reality component presents the target infrared image, wherein the augmented reality component is further used for making a scene target present in the target infrared image correspond to the pose of that scene target in the real scene.
Optionally, processing the infrared image based on the conversion between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain a target infrared image matched with the optical field angle of the augmented reality component specifically includes: processing the infrared image based on a conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain the target infrared image matched with the optical field angle of the augmented reality component, wherein the conversion relationship is at least used to define an infrared image cropping size corresponding to the difference between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component.
Optionally, processing the infrared image based on the conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain the target infrared image matched with the optical field angle of the augmented reality component specifically includes: determining a calibration reference object in the real scene; calibrating a predetermined calibration conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component based on the difference between the distance of the calibration reference object from the wearable protective tool and a preset calibration distance of the wearable protective tool, so as to obtain a calibrated conversion relationship; and processing the infrared image based on the calibrated conversion relationship to obtain the target infrared image matched with the optical field angle of the augmented reality component.
Optionally, the method further includes: performing enhancement processing on the outline of a scene target appearing in the target infrared image, so that the outline of the scene target is displayed in an enhanced manner in the target infrared image presented by the augmented reality component.
Alternatively, the method further includes: generating a backlight brightness adjusting instruction according to the ambient brightness in the real scene, and sending the backlight brightness adjusting instruction to the augmented reality component, the augmented reality component being configured to adjust its backlight brightness according to the backlight brightness adjusting instruction and present the target infrared image based on the adjusted backlight brightness.
Alternatively, the method further includes: remotely transmitting spatial position information of the wearable protective tool, so that a remote server maintains the movement track of the wearable protective tool using the spatial position information, the remote server being further configured to send a visual navigation instruction generated based on the movement track to a rear wearable protective tool, wherein the starting movement time of the rear wearable protective tool is later than that of the wearable protective tool.
Alternatively, the method further includes: associating the distance of a scene target in the real scene relative to the wearable protective tool with the scene target, and sending the association processing result to the augmented reality component, the augmented reality component being configured to present the distance in association with the scene target based on the association processing result.
Optionally, the method further comprises: generating a switching instruction of the image presentation type of the target infrared image, and sending the switching instruction to the augmented reality component; and the augmented reality component is used for presenting the target infrared image according to the image presentation type corresponding to the switching instruction, wherein the image presentation type comprises a thermal image type or a gray image type.
Optionally, generating the switching instruction for the image presentation type of the target infrared image specifically includes:
generating the switching instruction according to the temperature information of the real scene; or generating the switching instruction according to a switching request for the image presentation type of the target infrared image, the switching request being generated by a mode switch in response to an external touch operation.
Optionally, the method further includes: generating a breathing switching instruction according to a gas composition parameter of the environment in which the wearable protective tool is located, and sending the breathing switching instruction to a breathing valve, the breathing valve being configured to determine a communication state according to the breathing switching instruction, wherein the communication state includes a state of communicating with the atmosphere of the environment in which the wearable protective tool is located or a state of communicating with a gas storage bottle.
Optionally, the method further includes: generating breathing warning information according to the gas composition parameter, and sending the breathing warning information to the augmented reality component, the augmented reality component being configured to present the breathing warning information, wherein the breathing warning information includes information content indicating that dangerous gas exists in the environment in which the wearable protective tool is located.
Alternatively, the method further includes: generating gas remaining amount prompt information according to the remaining gas amount in the gas storage bottle, and sending the gas remaining amount prompt information to the augmented reality component, the augmented reality component being configured to present the gas remaining amount prompt information.
Another embodiment further provides a non-transitory computer readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the scene presentation method for a wearable protective tool as described in the foregoing embodiments.
Based on the above embodiments, the wearable protective tool may have an augmented reality component through which the target object observes the real scene, and an infrared detection component for outputting an infrared image of the real scene. The infrared image, which the infrared detection component outputs by sensing the temperature of scene targets in the real scene, can be converted by the processing component into a target infrared image matched with the optical field angle of the augmented reality component. The augmented reality component can therefore enhance the correct reproduction of the real scene within the field of view of the target object by presenting the converted target infrared image, thereby enhancing the visual recognizability of the real scene and helping to alleviate or even eliminate the difficulty of clearly perceiving the real scene.
Drawings
The following drawings are only schematic illustrations and explanations of the present application, and do not limit the scope of the present application:
fig. 1 is an exemplary partial structural schematic of a wearable protective tool in one embodiment;
fig. 2 is a schematic diagram of a simplified presentation mechanism suitable for the wearable protective tool shown in fig. 1;
fig. 3 is a schematic diagram of a preferred presentation mechanism of the wearable protective tool shown in fig. 1;
figs. 4a and 4b are schematic diagrams of the preferred presentation mechanism shown in fig. 3 with a backlight adjustment function further introduced;
fig. 5 is a schematic diagram of the preferred presentation mechanism shown in fig. 3 with a mode switching function further introduced;
fig. 6 is a schematic diagram of the preferred presentation mechanism shown in fig. 3 with a local enhancement function further introduced;
figs. 7a and 7b are schematic diagrams of the preferred presentation mechanism shown in fig. 3 with a distance visualization function further introduced;
fig. 8 is a schematic diagram of a position reporting mechanism of the wearable protective tool shown in fig. 1;
fig. 9 is a schematic diagram of a trajectory navigation mechanism of the wearable protective tool shown in fig. 1;
fig. 10 is a schematic diagram of a respiratory protection mechanism of the wearable protective tool shown in fig. 1;
fig. 11 is a perspective view of an example structure of the wearable protective tool shown in fig. 1;
fig. 12 is an exploded schematic view of the example structure shown in fig. 11;
fig. 13 is an exemplary flow diagram of a scene presentation method for a wearable protective tool in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below by referring to the accompanying drawings and examples.
Fig. 1 is an exemplary partial structural schematic of a wearable protective tool in one embodiment. Referring to fig. 1, in this embodiment, the wearable protective tool may have an augmented reality component 30, an infrared detection component 51, and a processing component 500.
The augmented reality component 30 is used by the target object to observe a real scene. Specifically, the augmented reality component 30 may include a frame 31 and an optical waveguide lens 32, and the optical waveguide lens 32 may be embedded in the frame 31. Since the optical waveguide lens 32 includes an optically transparent medium substrate and dielectric films covering both side surfaces of the substrate, it has a light transmission characteristic, so that the target object can view the real scene through the optical waveguide lens 32. Illustratively, the target object may be a rescuer, or another device having a scene observation function. The real scene may include disaster scenes such as a fire or a smoke-filled environment, and other scenes in which environmental detection can be performed using the infrared detection component.
The augmented reality component 30 has a preconfigured optical field angle FOV_ar, which is the field angle at which the target object observes the real scene through the augmented reality component 30 (i.e., the optical waveguide lens 32), that is, the portion of the target object's original field of view that intersects the light-receiving range of the optical waveguide lens 32 when the target object wears the augmented reality component 30. Taking a rescuer wearing the wearable protective tool as an example, both eyes of the rescuer can observe the real scene (e.g., a disaster scene such as a fire environment) through the augmented reality component 30 (i.e., the optical waveguide lens 32) within the field of view corresponding to the optical field angle FOV_ar. The optical field angle FOV_ar of the augmented reality component 30 may be configured by setting the dimensional specification of the optical waveguide lens 32 and the distance of the optical waveguide lens 32 from the target object (i.e., the dimensional specification of the frame 31).
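As a minimal geometric sketch of this configuration relationship (an illustrative approximation, not a formula stated in the patent), if the usable width and height of the optical waveguide lens are w and h and its distance from the eye is d, the horizontal and vertical components of the optical field angle can be approximated as:

```latex
\mathrm{FOV\_ar}_{h} \approx 2\arctan\!\left(\frac{w}{2d}\right),
\qquad
\mathrm{FOV\_ar}_{v} \approx 2\arctan\!\left(\frac{h}{2d}\right)
```

Increasing the lens size or shortening the lens-to-eye distance therefore enlarges FOV_ar, subject to the wearing-comfort constraints discussed later.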
The augmented reality component 30 may further include a display driving module 33, which may be installed outside the frame 31 and electrically connected to the optical waveguide lens 32. The display driving module 33 includes at least a light engine that can emit light waves into the optical waveguide lens 32; the light waves are constrained by the dielectric films of the optical waveguide lens 32 to propagate within the optically transparent medium substrate, so that a specific image can be presented in the optical waveguide lens 32 without obstructing the target object's observation of the real scene, thereby implementing augmentation of reality.
The infrared detection component 51 is used for outputting an infrared image of the real scene, for example, acquiring an infrared image in a smoke-filled environment. Specifically, the infrared detection component 51 may include an infrared detector integrated with a sensor array for infrared imaging; by sensing the temperature in the real scene through the sensor array of the infrared detector, an infrared image can be obtained, and the pixel values of the infrared image are determined by the temperature. The mounting position of the infrared detection component 51 is not particularly limited; it may, for example, be mounted on one side of the frame 31.
The infrared detection component 51 has a preconfigured imaging field angle FOV_inf, which is determined by the field of view of the infrared detector's lens. In general, the imaging field angle FOV_inf of the infrared detection component 51 may be larger than the optical field angle FOV_ar of the augmented reality component 30.
Moreover, the field of view corresponding to the imaging field angle FOV_inf of the infrared detection component 51 and the field of view corresponding to the optical field angle FOV_ar of the augmented reality component 30 overlap in the same direction. Being "in the same direction" can be understood as the two fields of view facing the same side of the wearable protective tool; the "overlap" can be understood as the two fields of view overlapping within a specified distance range relative to the wearable protective tool (a range specifically related to the preset calibration distance of the wearable protective tool).
As shown in fig. 2, the infrared image 60 output by the infrared detection component 51 may include a scene target 40 in the real scene. If the infrared image 60 is presented on the augmented reality component 30, that is, presented fused with the real scene observed by the target object, the light engine included in the display driving module 33 can generate light waves into the optical waveguide lens 32 according to the infrared image 60, thereby producing an enhanced presentation effect for the scene target 40 in the real scene observed by the target object. The scene targets in the real scene mentioned in the embodiments of the present application may include any objects appearing in the real scene and may be subdivided according to the specific type of the real scene. For example, for disaster scenes such as fire and smoke-filled environments, scene targets may include people, road obstacles, vehicles, furniture or home appliances, building structures (e.g., entrances, windows, walls), and the like.
However, if the infrared image 60 is directly provided to the augmented reality component 30 for presentation, due to the difference between the optical field angle of the augmented reality component 30 and the imaging field angle of the infrared detection component 51, a scene object in the infrared image 60 presented by the augmented reality component 30 may deviate from the pose of the scene object in the real scene, thereby possibly interfering with or even misleading the correct recognition of the real scene by the target object.
As described above, the optical field angle FOV_ar of the augmented reality component 30 (including its horizontal and vertical field angle components) may be set according to the dimensional specification of the optical waveguide lens 32 and its distance from the target object. Such a setting generally takes the wearing comfort of the target object into consideration and is generally smaller than the imaging field angle FOV_inf (including its horizontal and vertical field angle components) of the infrared detection component 51. Accordingly, if the infrared image 60 formed within the field of view corresponding to the imaging field angle FOV_inf is directly presented on the augmented reality component 30, the entire image is compressed to fit the optical field angle FOV_ar, which may cause improper deformation or displacement of scene targets. That is, if the infrared image 60 is directly provided to the augmented reality component 30 for presentation, the scene target 40 in the infrared image 60 is presented to the target object with distortion due to the difference between the optical field angle FOV_ar of the augmented reality component 30 and the imaging field angle FOV_inf of the infrared detection component 51.
Fig. 2 is a schematic diagram of a simplified presentation mechanism suitable for the wearable protective tool shown in fig. 1, taking as an example human eyes observing the real scene through the augmented reality component 30; this should not be construed as a specific limitation on the embodiments of the present application. Referring to fig. 2 in conjunction with fig. 1, fig. 2 shows the effect of directly providing the infrared image 60 to the augmented reality component 30. As can be seen from fig. 2, the augmented reality component 30 can present an image contour 41' of the scene target 40 in the infrared image 60 within the field of view corresponding to the optical field angle FOV_ar, but due to the field angle difference, the image contour 41' may deviate seriously from the projection contour 42 of the scene target 40 on the augmented reality component 30 (i.e., the optical waveguide lens 32) along the observation optical path of the target object. As a result, the image contour 41' produces, in the visual perception of the target object, a ghost target 40' with a false pose and cannot reflect the real pose of the scene target 40.
If the real scene has a certain visual visibility relative to the target object, the ghost target 40' will disturb the judgment of the target object on the scene target 40; for example, in a fire rescue scenario, rescue of trapped persons by rescuers may be delayed.
If the visual visibility of the real scene relative to the target object is very low, the target object can only observe the pose-distorted ghost target 40'; in this case, for the target object, the infrared image 60 directly presented on the augmented reality component 30 is equivalent to producing a VR (Virtual Reality) effect that deviates from the real scene.
It can be seen that directly providing the infrared image 60 to the augmented reality component 30 for presentation is not conducive to the target object's knowledge of the real environment, regardless of the visibility in the real scene.
Accordingly, in this embodiment, the infrared image 60 is processed by the processing component 500 to obtain a target infrared image for presentation on the augmented reality component 30. The processing component 500 may include a processor with image processing capability, or may include a controller without image processing capability together with a processor with image processing capability; for example, the processor with image processing capability may be a DSP (Digital Signal Processor), a GPU (Graphics Processing Unit), or the like.
Fig. 3 is a schematic diagram of a preferred presentation mechanism of the wearable protective tool shown in fig. 1. Referring to fig. 3 in conjunction with fig. 1, the processing component 500 is configured to acquire the infrared image 60 output by the infrared detection component 51, process it based on the conversion between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30 to obtain a target infrared image 80 matched with the optical field angle FOV_ar of the augmented reality component 30, and send the target infrared image 80 to the augmented reality component 30.
In order to process the infrared image 60 based on the conversion between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30, so as to obtain the target infrared image 80 matched with the optical field angle FOV_ar of the augmented reality component 30, the processing component 500 may be specifically configured to:
process the infrared image 60 based on the conversion relationship between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30 to obtain the target infrared image 80 matched with the optical field angle FOV_ar of the augmented reality component 30. The conversion relationship is at least used to define an infrared image cropping size corresponding to the difference between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30. It should be understood that if other image editing operations, such as rotation or translation, need to be performed on the infrared image 60 to obtain the target infrared image 80 matched with the optical field angle FOV_ar of the augmented reality component 30, the conversion relationship may also be defined to include a rotation amount and a translation amount of the infrared image.
The conversion relationship between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30 may be a predetermined calibration conversion relationship, or a calibrated conversion relationship obtained by calibrating (or correcting) that calibration conversion relationship during actual use of the wearable protective tool. The calibration conversion relationship may be determined by calibrating the wearable protective tool (i.e., calibrating the infrared detection component 51 and the augmented reality component 30) without regard to the deployment position deviation between the infrared detection component 51 and the augmented reality component 30, or by calibrating the wearable protective tool with that deployment position deviation taken into account.
For example, the wearable protective tool may be calibrated by a technician before shipment from the factory, e.g., using a preset calibration object placed at the preset calibration distance from the wearable protective tool in a calibration environment, to determine a calibration conversion relationship either without or with consideration of the deployment position deviation between the infrared detection component 51 and the augmented reality component 30. The deployment position deviation arises because the infrared detection component 51 and the augmented reality component 30 (i.e., the optical waveguide lens 32) must avoid each other at their installation positions on the wearable protective tool, that is, their optical axes do not coincide, so that each can realize its own function. When the deployment position deviation between the infrared detection component 51 and the augmented reality component 30 has only a small influence on the presentation effect of the infrared image 60 in the augmented reality component 30, it may be disregarded.
The difference between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30 includes a horizontal field angle component difference and a vertical field angle component difference; accordingly, in the foregoing conversion relationship, the infrared image cropping size includes a cropping size in the image horizontal direction and/or the image longitudinal direction (i.e., the direction perpendicular to the horizontal direction).
Specifically, if the vertical field angle component of the imaging field angle FOV_inf of the infrared detection component 51 is greater than the vertical field angle component of the optical field angle FOV_ar of the augmented reality component 30, the cropping size of the infrared image 60 in the image longitudinal direction may be determined according to the difference between the two vertical field angle components; similarly, if the horizontal field angle component of FOV_inf is greater than that of FOV_ar, the cropping size in the image horizontal direction may be determined according to the difference between the two horizontal field angle components. For example, a correspondence between the field angle component differences of the infrared detection component 51 and the augmented reality component 30 and the image cropping sizes may be preset; after the vertical and horizontal field angle component differences are determined, the cropping sizes in the image longitudinal and horizontal directions can be calculated from this correspondence. In general, for either direction, the cropping sizes on the two sides of the infrared image 60 may be the same.
Based on the conversion relationship, the target infrared image 80 can be obtained quickly during use of the wearable protective tool. As shown in fig. 3, an edge portion of the infrared image 60 may be cropped off; the cropped portion is represented in fig. 3 as the shaded region surrounding the target infrared image 80, so that the target infrared image 80 has an overall image size matched with the optical field angle FOV_ar of the augmented reality component 30. The size scale and shape of the scene target 40 in the infrared image 60 are not changed during cropping, so the scene target 40 in the cropped target infrared image 80 is not improperly deformed or displaced when presented on the augmented reality component 30.
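As a minimal illustrative sketch of the symmetric cropping described above (not code from the patent; the function name, parameters, and the pinhole-style mapping from field angle to pixel extent are assumptions standing in for the preset correspondence):

```python
import math
import numpy as np

def crop_to_optical_fov(ir_image: np.ndarray,
                        fov_inf_h: float, fov_inf_v: float,
                        fov_ar_h: float, fov_ar_v: float) -> np.ndarray:
    """Crop an infrared frame so its extent matches the AR optical field angle.

    Assumes a simple pinhole model: the pixel span kept in each direction is
    proportional to tan(FOV_ar/2) / tan(FOV_inf/2). Field angles are in degrees.
    """
    h_px, w_px = ir_image.shape[:2]

    # Fraction of the sensor width/height that falls inside the AR field angle.
    keep_w = math.tan(math.radians(fov_ar_h) / 2) / math.tan(math.radians(fov_inf_h) / 2)
    keep_h = math.tan(math.radians(fov_ar_v) / 2) / math.tan(math.radians(fov_inf_v) / 2)
    keep_w, keep_h = min(keep_w, 1.0), min(keep_h, 1.0)

    # Symmetric crop: the same size is removed from both sides in each direction.
    crop_x = int(round(w_px * (1.0 - keep_w) / 2))
    crop_y = int(round(h_px * (1.0 - keep_h) / 2))
    return ir_image[crop_y:h_px - crop_y, crop_x:w_px - crop_x]

# Example: a 640x512 thermal frame, FOV_inf of 56x42 degrees, FOV_ar of 40x30 degrees.
frame = np.zeros((512, 640), dtype=np.uint16)
target = crop_to_optical_fov(frame, 56.0, 42.0, 40.0, 30.0)
```

The frame sizes and field angles in the example are placeholders; the key point is that cropping, unlike rescaling, preserves the size scale and shape of scene targets.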
It is further considered that there may be a deployment position deviation between the infrared detection component 51 and the augmented reality component 30 (i.e., the optical waveguide lens 32), which may result in a position offset between the optical axis of the infrared detection component 51 and the optical axis of the augmented reality component 30, or in the two optical axes intersecting. Consequently, if the infrared image 60 is processed into the target infrared image 80 without considering the deployment position deviation, the scene target 40 presented in the target infrared image 80 on the augmented reality component 30 may be deformed or offset relative to its pose in the real scene. It can be understood from this that, when the deployment position deviation between the infrared detection component 51 and the augmented reality component 30 is not considered, the position offset or intersection of the two optical axes is by default regarded as negligible, that is, the optical axes are treated as parallel with negligible offset.
To reduce or even eliminate the deformation or positional offset of the scene target 40 presented on the augmented reality component 30 that is caused by the deployment position deviation, the infrared image 60 may be processed with a calibration conversion relationship determined by calibrating the wearable protective tool with the deployment position deviation between the infrared detection component 51 and the augmented reality component 30 taken into account, so as to obtain the target infrared image 80 matched with the optical field angle FOV_ar of the augmented reality component 30. In this case, the calibration conversion relationship at least defines a compensation amount of the infrared image cropping size determined based on the deployment position deviation between the infrared detection component 51 and the augmented reality component 30. The deployment position deviation includes a deviation in the vertical direction and/or the horizontal direction; accordingly, the compensation amount of the infrared image cropping size includes a cropping size compensation amount in the image longitudinal direction and/or the image horizontal direction.
Illustratively, the infrared image 60 may be cropped with asymmetrical two-sided cropping sizes (i.e., the cropping sizes on the two sides of the image may differ in the image longitudinal or horizontal direction), so as to compensate for the deployment position deviation (in the vertical and/or horizontal direction) between the infrared detection component 51 and the augmented reality component 30 (i.e., the optical waveguide lens 32) and to avoid the deformation or displacement that could otherwise occur when the scene target 40 is presented on the augmented reality component 30.
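Continuing the earlier sketch (again an illustration with assumed names, not the patent's implementation), an offset compensation can be folded into the crop by splitting the removed pixels unevenly between the two sides:

```python
def crop_with_offset_compensation(ir_image, crop_x, crop_y,
                                  comp_x_px=0, comp_y_px=0):
    """Asymmetric crop for deployment-position compensation.

    crop_x / crop_y are the symmetric per-side cropping sizes derived from the
    field angle difference; comp_x_px / comp_y_px are the compensation amounts
    (in pixels) derived from the deployment position deviation, assumed smaller
    in magnitude than the symmetric crop so the total cropped size is preserved.
    """
    h_px, w_px = ir_image.shape[:2]

    left = max(crop_x + comp_x_px, 0)      # crop more on one side ...
    right = max(crop_x - comp_x_px, 0)     # ... and less on the opposite side
    top = max(crop_y + comp_y_px, 0)
    bottom = max(crop_y - comp_y_px, 0)

    return ir_image[top:h_px - bottom, left:w_px - right]
```

Shifting the crop window in this way moves the retained field of view toward the optical axis of the augmented reality component without rescaling the scene targets.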
In general, the deformation or offset of the scene target 40 in the target infrared image 80 displayed on the augmented reality component 30 that is caused by the deployment position deviation may vary slightly with the distance of the scene target 40 from the wearable protective tool. When the distance of the scene target 40 from the wearable protective tool in the real scene differs from the preset calibration distance between the wearable protective tool and the preset calibration object used during calibration, the accuracy of the predetermined calibration conversion relationship may decrease. Therefore, in the case where the deployment position deviation between the infrared detection component 51 and the augmented reality component 30 is considered, a calibration reference object may be determined in the real scene during actual use of the wearable protective tool, and the predetermined calibration conversion relationship may be calibrated on site based on the difference between the distance of the calibration reference object from the wearable protective tool and the preset calibration distance, so as to improve the accuracy of the cropping of the infrared image 60. Through such on-site calibration, when the distance of the scene target 40 from the wearable protective tool deviates from the preset calibration distance, the offset of the scene target 40 in the target infrared image 80 presented on the augmented reality component 30 relative to the real scene can be reduced or even eliminated.
That is, in one embodiment, in order to take the deployment position deviation between the infrared detection component 51 and the augmented reality component 30 into account, the processing component 500 is specifically configured to: determine a calibration reference object in the real scene; calibrate the predetermined calibration conversion relationship between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30 based on the difference between the determined distance of the calibration reference object from the wearable protective tool and the preset calibration distance of the wearable protective tool, so as to obtain a calibrated conversion relationship; and process the infrared image 60 based on the calibrated conversion relationship to obtain the target infrared image 80 matched with the optical field angle FOV_ar of the augmented reality component 30.
Here, the calibrated conversion relationship reflects a calibration of the infrared image cropping size compensation amount that was determined based on the deployment position deviation between the infrared detection component 51 and the augmented reality component 30. For example, a correspondence between the difference (between the distance of the calibration reference object from the wearable protective tool and the preset calibration distance) and the calibration amount of the cropping size compensation may be preset; after the distance difference is determined, the calibration amount of the compensation is calculated from this correspondence. Alternatively, a correspondence between the distance difference and the calibration amount of the full infrared image cropping size (including the compensation amount) may be preset; after the distance difference is determined, the calibrated cropping size is calculated from this correspondence, yielding the required calibrated conversion relationship.
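A minimal sketch of such an on-site calibration, assuming a simple preset linear correspondence (the gain values, function name, and parameters are illustrative assumptions, not values from the patent):

```python
def calibrate_compensation(comp_x_px, comp_y_px,
                           reference_distance_m, preset_calib_distance_m,
                           gain_px_per_m=(1.5, 1.0)):
    """Adjust the cropping-size compensation according to how far the on-site
    calibration reference object is from the preset calibration distance.

    The preset correspondence is modeled here as a linear gain (pixels per
    meter of distance difference) per axis; in practice it could equally be a
    lookup table established during factory calibration.
    """
    distance_diff = reference_distance_m - preset_calib_distance_m
    calibrated_x = comp_x_px + gain_px_per_m[0] * distance_diff
    calibrated_y = comp_y_px + gain_px_per_m[1] * distance_diff
    return int(round(calibrated_x)), int(round(calibrated_y))

# Example: reference object detected at 4.2 m, factory calibration done at 3.0 m.
comp_x, comp_y = calibrate_compensation(12, -6, 4.2, 3.0)
```

The calibrated compensation amounts would then feed the asymmetric crop sketched above.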
In this case, the wearable protective tool may further include a device, such as an eyeball tracker, for tracking and detecting the attention position of the target object within the field of view corresponding to the optical field angle FOV_ar of the augmented reality component 30, and the processing component 500 may determine the scene target located at the detected attention position as the calibration reference object in the real scene. That is, in the embodiments of the present application, as the attention position or viewing direction of the target object changes, the calibration reference object may change accordingly.
Alternatively, the processing component 500 may determine a scene target appearing in the infrared image 60 as the calibration reference object in the real scene. If only one scene target appears in the infrared image 60, that scene target is the calibration reference object; if multiple scene targets appear, the processing component 500 may select among them according to the image coordinates of each scene target in the infrared image 60, and/or its area in the infrared image 60, and/or its target type. For example, the processing component 500 may preferentially select the scene target closest to the center of the infrared image 60, and/or the scene target with the largest area in the infrared image 60, and/or a scene target with a humanoid outline; this may be configured according to requirements. In this case, the calibration reference object may change as the detection direction of the infrared detection component 51 changes.
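One possible selection rule, sketched with assumed data structures (a list of detected targets with bounding boxes and types; the scoring weights are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class SceneTarget:
    cx: float      # bounding-box center x, in pixels
    cy: float      # bounding-box center y, in pixels
    area: float    # bounding-box area, in pixels^2
    kind: str      # e.g. "person", "door", "vehicle"

def pick_calibration_reference(targets, image_w, image_h):
    """Prefer targets near the image center, with large area, and humanoid."""
    if not targets:
        return None
    img_cx, img_cy = image_w / 2, image_h / 2
    max_area = max(t.area for t in targets)

    def score(t):
        # Normalized closeness to the image center (1 = at center, 0 = at a corner).
        dist = ((t.cx - img_cx) ** 2 + (t.cy - img_cy) ** 2) ** 0.5
        max_dist = (img_cx ** 2 + img_cy ** 2) ** 0.5
        closeness = 1.0 - dist / max_dist
        size = t.area / max_area
        humanoid_bonus = 0.5 if t.kind == "person" else 0.0
        return closeness + size + humanoid_bonus

    return max(targets, key=score)
```

Weighting the three criteria jointly, rather than applying them in a fixed order, is only one of the configurations the paragraph above allows.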
By calibrating the predetermined calibration conversion relationship between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30 based on a calibration reference object in the real scene, and processing the infrared image 60 with the calibrated conversion relationship to obtain the target infrared image 80, the pose deformation or pose offset, relative to the real scene, of scene targets in the target infrared image 80 presented on the augmented reality component 30 can be minimized. In particular, when the target infrared image 80 presented on the augmented reality component 30 includes the scene target chosen as the calibration reference object, the deformation or offset of that calibration reference object's pose in the target infrared image 80 relative to its pose in the real scene can approach zero, compared with other scene targets. In other words, the augmented reality component 30 can achieve a better scene presentation effect.
In addition, the processing component 500 may acquire the distance of the calibration reference object from the wearable protective tool in any manner. For example, as shown in figs. 7a and 7b, the wearable protective tool may further include a distance detection component 53, which may be used to detect the distance of any calibration reference object in the real scene from the wearable protective tool and send it to the processing component 500.
Regardless of which of the above conversion-based processing manners the processing component 500 applies to the infrared image 60, the scene target 40 keeps the same pose in the target infrared image 80 as in the infrared image 60.
Moreover, when presenting the target infrared image 80, the augmented reality component 30 makes the scene target 40 present in the target infrared image 80 correspond to the pose of that scene target 40 in the real scene. This pose correspondence may at least include that, when the target object observes the real scene through the augmented reality component 30, the scene target 40 present in the target infrared image 80 coincides with a preset identification position of the scene target 40 in the real scene, where the preset identification position may be, for example, the center position of the scene target or another specific position having a position identification function, and may be preset. The pose correspondence may further include that, when the target object observes the real scene through the augmented reality component 30, the scene target 40 present in the target infrared image 80 overlaps the target envelope region of the scene target 40 in the real scene, where the target envelope region may be a local region of the scene target or the entire region of the scene target. Further, the overlap of the target envelope region may be a partial overlap whose overlap ratio exceeds a preset proportion threshold (for example, 50%), or a complete overlap whose overlap ratio approaches 100%; complete overlap of the target envelope region is optimal, as it makes the scene target 40 present in the target infrared image 80 coincide with the contour of the scene target 40 in the real scene.
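As an illustrative way to check such an overlap ratio against a threshold (rectangular envelope regions, an intersection-over-union measure, and the 0.5 threshold are assumptions for the sketch, not requirements of the patent):

```python
def overlap_ratio(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def pose_corresponds(presented_box, projected_box, threshold=0.5):
    """True if the presented target envelope overlaps the real target's
    projected envelope by at least the preset proportion threshold."""
    return overlap_ratio(presented_box, projected_box) >= threshold
```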
Thus, if there is sufficient visibility in the real scene for the scene object to be resolved from the target object, when the target infrared image 80 is presented on the augmented reality component 30 (i.e., the optical waveguide lens 32), the image contour 41 of the scene object 40 in the target infrared image 80 coincides with the projection contour 42 of the scene object 40 in the augmented reality component 30 (i.e., the optical waveguide lens 32) along the observation optical path of the target object, so as to enhance the recognition capability of the target object for the scene object 40 in the real scene. The reference to the coincidence of the image contour 41 and the projection contour 42 herein means that the image contour 41 and the projection contour 42 approach to a theoretical perfect coincidence, and does not exclude the case where there is a deviation of the contour details.
If visibility in a real scene (e.g., a disaster scene such as a fire) is extremely low, resulting in the scene targets being indistinguishable from the target objects, then when the target infrared image 80 is presented to the augmented reality component 30 (i.e., the optical waveguide lens 32), the scene targets 40 can substantially truly reproduce the corresponding poses of the scene targets 40 in the real scene, so that the target objects can identify the scene targets 40 in the real scene completely depending on the target infrared image 80.
It can be seen that, through the processing of the processing component 500, the infrared image 60 output by the infrared detection component 51 by sensing the temperature of the scene target in the real scene can be converted into the target infrared image 80 matched with the optical field angle of the augmented reality component, so that the augmented reality component 30, by presenting the target infrared image 80 on the optical waveguide lens 32, can enhance the faithful reproduction of the real scene in the field of view of the target object, thereby enhancing the visual recognizability of the real scene and helping to alleviate or even eliminate the difficulty of clearly perceiving the real scene.
Moreover, because the protective equipment in this embodiment is wearable, both hands of the rescuer are freed, making it convenient to hold auxiliary tools such as fire extinguishers and fire hydrants, and improving fire-extinguishing and rescue efficiency.
In particular implementations, the image type of the target infrared image 80 may be a thermal image (e.g., a thermal image in the colloquially named red-iron mode) or a grayscale image. When the temperature difference between scene targets in the real scene (e.g., a disaster scene such as a fire environment), or between a scene target and its surroundings, is large, the thermal image presents the scene targets more clearly; when that temperature difference is small, the scene targets are less distinct in the thermal image than in the grayscale image. Therefore, to accommodate different situations, this embodiment allows the image type of the target infrared image 80 to be switched.
Fig. 4a and 4b are schematic diagrams of the preferred rendering mechanism shown in fig. 3 further incorporating a mode switching function. Referring to fig. 4a and 4b, the processing component 500 may be further configured to generate a switching instruction Ins_sw for the image presentation type of the target infrared image 80 and send the switching instruction Ins_sw to the augmented reality component 30, and the augmented reality component 30 may be further configured to present the target infrared image 80 according to the image presentation type corresponding to the switching instruction Ins_sw; that is, the switching instruction carries the image presentation type in which the target infrared image 80 is to be presented in the augmented reality component 30, where the image presentation type includes a thermal image type or a grayscale image type.
The switching instruction Ins_sw generated by the processing component 500 may be triggered according to the temperature information in the real scene detected by the infrared detection component 51, or may be triggered by an external touch operation.
Illustratively, in fig. 4a, the processing component 500 may be further configured to generate the switching instruction Ins_sw according to the temperature information Info_temp of the real scene detected by the infrared detection component 51. For example, the temperature information Info_temp may be determined from the pixel values of the infrared image 60, and the processing component 500 may be further configured to determine the temperature difference between scene targets in the real scene, or the temperature difference of a scene target relative to its surroundings, according to the temperature information Info_temp detected by the infrared detection component 51, and to generate the switching instruction Ins_sw according to the determined temperature difference; further, if the determined temperature difference is large, a switching instruction carrying the thermal image type is generated, and if the determined temperature difference is small, a switching instruction carrying the grayscale image type is generated.
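As a hedged illustration of this temperature-difference criterion, the sketch below derives a robust temperature spread from a per-pixel temperature map and picks the presentation type accordingly; the radiometric conversion from pixel values to temperatures, the percentile choice, and the 15 °C threshold are assumptions rather than values given by the embodiment.

```python
import numpy as np

# Hypothetical threshold: the text only distinguishes "large" vs "small" temperature difference.
TEMP_DIFF_THRESHOLD_C = 15.0

def choose_presentation_type(infrared_temps: np.ndarray) -> str:
    """Pick the image presentation type from per-pixel temperatures (deg C).

    A robust spread (5th to 95th percentile) stands in for the temperature
    difference between scene targets or relative to the surroundings.
    """
    lo, hi = np.percentile(infrared_temps, [5, 95])
    return "thermal" if (hi - lo) >= TEMP_DIFF_THRESHOLD_C else "grayscale"

def make_switch_instruction(infrared_temps: np.ndarray) -> dict:
    # Ins_sw carries the image presentation type to be used by the augmented reality component.
    return {"type": "Ins_sw", "presentation": choose_presentation_type(infrared_temps)}
```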
Alternatively, in fig. 4b, the wearable protective gear may further comprise a mode switch 57 configured to generate a switching request Req_sw for the image presentation type of the target infrared image 80 in response to an external touch operation (e.g., pressing, toggling, or twisting) and to send the switching request Req_sw to the processing component 500; and the processing component 500 may be further configured to generate the switching instruction Ins_sw according to the switching request Req_sw. For example, the mode switch 57 may be a mechanical switch or a touch switch; the mode switch 57 may generate a switching request Req_sw having a specific level state in response to the external touch operation, and the processing component 500 may identify, from that level state, whether the requested image presentation type is the thermal image type or the grayscale image type. The mounting position of the mode switch 57 is not particularly limited in this embodiment of the application; it may, for example, be mounted on one side of the lens frame.
In addition, the ambient brightness in the real scene also affects the recognition of the scene object in the target infrared image 80. To this end, the wearable brace in this embodiment may further adaptively adjust the backlight brightness of the augmented reality component 30 according to the ambient brightness to render the target infrared image 80 based on the adjusted backlight brightness. The backlight brightness refers to the illumination brightness of the light source when the display driving module 33 (including the light engine) presents the target infrared image 80 to the optical waveguide lens 32. If the ambient brightness is high, the backlight brightness can be increased, so that the target infrared image 80 is not difficult to distinguish due to the high ambient brightness; conversely, if the ambient brightness is relatively low, the backlight brightness may be adjusted low to avoid too high backlight brightness stimulating the vision of the target object.
Fig. 5 is a schematic diagram of the preferred rendering mechanism shown in fig. 3 further incorporating a backlight adjustment function. Referring to fig. 5, the wearable protective gear in this embodiment may further include a brightness detection component 55, which may include a brightness sensor and may be configured to detect the ambient brightness L_env in the real scene, particularly within the field of view corresponding to the optical field angle FOV_ar of the optical waveguide lens 32. Accordingly, the processing component 500 may be further configured to generate a backlight brightness adjustment instruction Ins_adj according to the ambient brightness L_env and send the backlight brightness adjustment instruction Ins_adj to the augmented reality component 30, and the augmented reality component 30 may be further configured to adjust the backlight brightness according to the backlight brightness adjustment instruction Ins_adj so as to present the target infrared image 80 based on the adjusted backlight brightness, where the adjustment of the backlight brightness may be performed specifically by the display driving module 33 including the light engine.
For example, the processing component 500 may determine, according to a preset correspondence between ambient brightness and backlight brightness, the target value to which the augmented reality component 30 is to adjust the backlight brightness in response to the backlight brightness adjustment instruction Ins_adj.
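One possible form of that preset correspondence between ambient brightness and backlight brightness is a piecewise-linear lookup, sketched below; the lux break-points and backlight percentages are invented for illustration and are not values specified by the embodiment.

```python
import numpy as np

# Hypothetical correspondence table between ambient brightness (lux) and
# backlight level (% of maximum).
AMBIENT_LUX   = [1,  50, 200, 1000, 5000]
BACKLIGHT_PCT = [10, 25, 45,  70,   100]

def backlight_adjust_instruction(ambient_lux: float) -> dict:
    """Build Ins_adj carrying the target backlight level for the light engine."""
    target = float(np.interp(ambient_lux, AMBIENT_LUX, BACKLIGHT_PCT))
    return {"type": "Ins_adj", "backlight_percent": round(target, 1)}
```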
In this way, the target object is helped to see the scene targets in the target infrared image 80 presented by the augmented reality component 30 under various ambient brightness levels, while the augmented reality component 30 is prevented from producing visual stimulation to the target object under low ambient brightness.
Switching the image type is an optimization made from the perspective of the presentation form of the target infrared image 80, while adaptively adjusting the backlight brightness of the augmented reality component 30 is an optimization made from the perspective of the presentation environment of the target infrared image 80; the two may be performed independently of each other or in combination. In addition, this embodiment may further optimize the image content of the target infrared image 80, so as to further improve the recognizability of the scene targets appearing in the target infrared image 80.
As an optimization mechanism for the image content of the target infrared image 80, the processing component 500 may be further configured to perform local enhancement processing on the contours of scene targets appearing in the target infrared image 80. The local enhancement processing may be performed after the target infrared image 80 is obtained, or may be performed directly on the infrared image 60 after the infrared image 60 is obtained, with the contour-enhanced target infrared image 80 then obtained by processing that infrared image 60. The local enhancement processing may include enhancing the display of the contour with a distinct color, or enhancing it with a thickened line (for example, by an image stroking algorithm); the color or line type used may be preset according to the contour presentation requirements or adjusted flexibly during use of the protective gear, and is not particularly limited in this embodiment of the disclosure.
Fig. 6 is a schematic diagram of the preferred rendering mechanism shown in fig. 3 further incorporating a local enhancement function. Referring to fig. 6, the processing component 500 may be further configured to perform an enhancement process on the outline of the scene object appearing in the target infrared image 80, so that the outline of the scene object is displayed in an enhanced manner in the target infrared image 80 presented by the augmented reality component 30 (i.e., the optical waveguide lens 32).
Fig. 6 merely takes the scene target 40 having a human-body contour as an example to show the enhanced display effect of that contour; in practical applications, the scene targets on which contour enhancement can be performed are not limited to human bodies, and may also include objects such as gas tanks and furniture, as well as building structures such as passage openings and walls. For example, by performing target edge detection on the infrared image 60 or the target infrared image 80, the contours of all scene targets appearing in the infrared image 60 or the target infrared image 80 can be obtained at once, rather than detection and contour recognition being limited to a certain target type. After the contours of the detected scene targets are obtained, contour enhancement may be applied to the scene targets whose contours belong to a preset contour type to be enhanced; naturally, enhancement may also be applied to all contours determined by the edge detection. The contour types to be enhanced may be preset according to the type of the real scene; for example, for disaster scenes such as fires and smoke-filled environments, the contour types to be enhanced may include human-body contours, road-surface obstacle contours, vehicle contours, furniture or appliance contours, and building-structure-related contours.
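The following sketch shows one conventional way such contour enhancement could be realized with off-the-shelf edge detection (here OpenCV's Canny detector followed by contour stroking); the specific operators, thresholds, stroke color, and line width are illustrative assumptions, not the embodiment's prescribed algorithm.

```python
import cv2
import numpy as np

def enhance_contours(target_ir_u8: np.ndarray,
                     color=(0, 255, 255), thickness=2) -> np.ndarray:
    """Stroke the edges of all detected scene targets in an 8-bit, single-channel IR image.

    color and thickness are illustrative choices; the text only requires that
    the contour color or line weight make the outline stand out.
    """
    edges = cv2.Canny(target_ir_u8, 50, 150)                 # target edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    out = cv2.cvtColor(target_ir_u8, cv2.COLOR_GRAY2BGR)     # allow a colored stroke
    cv2.drawContours(out, contours, -1, color, thickness)    # image stroking
    return out
```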
Through the contour enhancement processing, the target object can identify scene targets in the target infrared image 80 more quickly and accurately. For example, in disaster scenes such as fire and smoke-filled environments, the target object can accurately recognize the postures and gestures of people appearing in the real scene, and thereby identify movements that help improve rescue efficiency, such as body language used by a person awaiting rescue to call for help or by a rescue teammate to give a prompt. In addition, the placement of objects in the real scene and the influence of the building structure on movement can be accurately recognized, so that dangerous targets such as gas tanks can be avoided, or a path can be found accurately.
The contour enhancement of the target infrared image 80 may be performed independently of switching the image presentation type and of adaptively adjusting the backlight brightness of the augmented reality component 30, or may be performed in combination with at least one of them. When the contour enhancement is implemented in combination with switching of the image type, the contour color used for the enhancement may change with the switching of the image presentation type, i.e., follow a color corresponding to the image presentation type. For example, when the image presentation type of the target infrared image 80 is determined to be the thermal image type (e.g., a thermal image in the colloquially named red-iron mode), the contour color may be a first color contrasting with the dominant tone of the thermal image (e.g., a cool-toned color); when the image presentation type of the target infrared image 80 is determined to be the grayscale image type, the contour color may be a second color contrasting with gray (e.g., a bright warm-toned color).
Fig. 7a and 7b are schematic diagrams of the preferred rendering mechanism shown in fig. 3 further incorporating a distance visualization function. Referring to fig. 7a and 7b, the wearable brace in this embodiment may further include a distance detection component 53, the distance detection component 53 may include, but is not limited to, a laser ranging detector or a radar ranging detector, and the distance detection component 53 is configured to detect a distance D40 of the scene target 40 in the real scene relative to the wearable brace.
In addition to being usable for calibrating the conversion relationship in the manner described above, the distance D40 detected by the distance detection component 53 may also, or instead, be presented in association with the target infrared image 80. Accordingly, the processing component 500 may be further configured to associate the distance D40 detected by the distance detection component 53 with the scene target 40 and send the association processing result to the augmented reality component 30; and the augmented reality component 30 is further configured to present the distance and the scene target 40 in association based on the association processing result. The association processing result may include a presentation instruction for presenting the distance and the scene target 40 in association, or distance data corresponding to the scene target 40, or a target infrared image on which the distance information corresponding to the scene target 40 has been superimposed, and so on.
For example, in fig. 7a, the processing component 500 may be specifically configured to generate a distance information presentation instruction Ins_dis (which may carry the distance D40 detected by the distance detection component 53) based on the distance D40 detected by the distance detection component 53, and to send the distance information presentation instruction Ins_dis to the augmented reality component 30 as the association processing result; and the augmented reality component 30 is further configured to present the scene target 40 in association with the distance D40 detected by the distance detection component 53 based on the distance information presentation instruction Ins_dis. For example, the display driving module 33 of the augmented reality component 30 may, according to the distance information presentation instruction Ins_dis, display the target infrared image 80 and the distance D40 detected by the distance detection component 53 on the optical waveguide lens 32 in a superimposed manner. Alternatively, the processing component 500 may send distance data representing the distance D40 instead of the distance information presentation instruction Ins_dis.
For example, in fig. 7b, the processing component 500 may be specifically configured to add the distance D40 detected by the distance detection component 53 to the target infrared image 80 and to send the target infrared image 80 with the added distance D40 to the augmented reality component 30 as the association processing result. It should be understood that the contour enhancement effect of the scene target is also shown in fig. 7b only to illustrate more clearly that the distance D40 superimposed in the target infrared image 80 is associated with the scene target; it is not intended to limit the associated presentation of the distance to being implemented in combination with contour enhancement.
The distance D40 presented in association with the scene target 40 in the target infrared image 80 may take the form of a visible numeric graphic, or of a picture containing the numeric information, and its display position in the target infrared image 80 is not particularly limited. For example, the associated distance D40 may be located within the contour of the scene target 40, or presented at a position a specific distance away from the scene target 40 so that, when multiple scene targets are present in the target infrared image 80, the distances detected by the distance detection component 53 for different scene targets can be distinguished; the specific value of that distance may be reasonably preset according to display requirements. When there are multiple scene targets in the real scene (e.g., a disaster scene such as a fire environment), the distance detection component 53 may detect the distance of each scene target and distinguish the detected distances according to the positional identification of each scene target, and the processing component 500 may likewise have the capability to distinguish the distances of different scene targets.
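A minimal sketch of superimposing a detected distance next to its scene target is given below; anchoring the text at the contour centroid and the OpenCV text style are illustrative choices only, not requirements of the embodiment.

```python
import cv2
import numpy as np

def overlay_distance(image_bgr: np.ndarray, contour: np.ndarray,
                     distance_m: float) -> np.ndarray:
    """Superimpose a detected distance next to the scene target it belongs to.

    The anchor point (contour centroid plus a small offset) keeps the number
    visually tied to its own target when several targets are present.
    """
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return image_bgr
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    cv2.putText(image_bgr, f"{distance_m:.1f} m", (cx + 10, cy),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return image_bgr
```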
By presenting the distance detected by the distance detection component 53 in association with the scene target in the target infrared image 80, the target object's spatial awareness of the scene target becomes more accurate, which can improve rescue efficiency in a disaster scene and at the same time remind the target object to control its pace, reducing the risk of injury from accidental collision with other moving objects.
In addition to reproducing real scenes (e.g., disaster scenes such as fire scene environments) at the augmented reality component 30, the wearable brace in this embodiment can also support trajectory tracking and trajectory navigation.
Fig. 8 is a schematic diagram of a position reporting mechanism of the wearable protective gear shown in fig. 1. Referring to fig. 8, in this embodiment the wearable protective gear may further include a position sensing component 73 and a wireless communication component 71, whose specific mounting positions are not particularly limited. The position sensing component 73 may include a combination of sensors such as an accelerometer, a gyroscope, and a magnetometer; for example, the position sensing component 73 may include a nine-axis sensing assembly comprising a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer, which helps improve the accuracy of the positioning calculation. The position sensing component 73 is used to sense the spatial position information of the wearable protective gear (i.e., of the target object 90 wearing the gear), and, under the drive control of the processing component 500, the wireless communication component 71 remotely transmits that spatial position information so that the remote server 900 (e.g., a command center server) can maintain a movement track of the wearable protective gear (i.e., a movement track of the target object 90 wearing the gear) using the spatial position information.
The remote server 900 can obtain the real-time position of the wearable protective gear through real-time sampling and fusion of the spatial position information, and can obtain the movement track of the wearable protective gear (i.e., of the target object 90 wearing the gear) by curve-fitting the real-time positions; for example, fig. 8 shows the movement track of the target object 90 from the entrance of the fire environment to the position of the person being searched for (as an example of the scene target 40).
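On the server side, the sampling-and-fitting step could look roughly like the sketch below, where a simple moving average stands in for whatever fusion or curve-fitting the command-center server actually applies; the class name, sample format, and window size are assumptions.

```python
import numpy as np

class MovementTrack:
    """Server-side sketch: collect reported positions and keep a smoothed track."""

    def __init__(self, window: int = 5):
        self.samples = []      # [(t, x, y, z), ...] as reported by the wearable gear
        self.window = window   # smoothing window, a placeholder for real filtering

    def add_sample(self, t: float, x: float, y: float, z: float) -> None:
        self.samples.append((t, x, y, z))

    def fitted_track(self) -> np.ndarray:
        """Return the smoothed sequence of positions forming the movement track."""
        pts = np.array([s[1:] for s in self.samples], dtype=float)
        if len(pts) < self.window:
            return pts
        kernel = np.ones(self.window) / self.window
        return np.column_stack([np.convolve(pts[:, i], kernel, mode="valid")
                                for i in range(3)])
```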
Moreover, the remote server 900 (e.g., a command center server) can generate a visual navigation indication based on the currently maintained movement track, and can remotely transmit the visual navigation indication generated from the movement track for presentation in a rear wearable protective gear, whose starting movement time is later than that of the current wearable protective gear (i.e., the one whose movement track has been maintained). The visual navigation indication may include a navigation track generated based on the maintained movement track, or a navigation guidance identifier generated based on the maintained movement track, for example indicating forward, backward, left turn, or right turn.
For example, in some disaster scenarios such as fires, the target objects may need to wear the wearable protective gear individually. In this case, each wearable protective gear may be assigned a corresponding device identifier in advance, and the spatial position information transmitted by each wearable protective gear to the remote server through the wireless communication component 71 may carry the device identifier of that gear, so that the remote server can distinguish the target objects wearing different wearable protective gear. When rescue work is carried out as a group action, in order to find as soon as possible a target object that entered the disaster scene at an earlier departure time, or to improve rescue efficiency, the remote server 900 may maintain the movement track of the target object that entered the disaster scene earlier (i.e., the movement track of that wearable protective gear), generate a visual navigation indication based on the maintained movement track, and remotely transmit the visual navigation indication to the wearable protective gear worn by a target object entering the disaster scene at a later departure time (i.e., the rear wearable protective gear), so that the augmented reality component of the rear wearable protective gear can present the visual navigation indication. This guides the later target object to enter the disaster scene most efficiently along the movement track of the earlier target object, or to find the earlier target object efficiently, so as to complete cooperative rescue work in the shortest time and prevent a target object acting alone from getting lost in the disaster scene. Thus, by acquiring the spatial position information of target objects in the disaster scene and the movement tracks determined from it, the remote server 900 can effectively coordinate the overall rescue work.
The augmented reality component 30 of the wearable protective gear can further be used to present a visual navigation indication that guides the direction of travel. If the current wearable protective gear serves as the rear wearable protective gear, the presented visual navigation indication can be generated based on the movement track of a target object that entered the disaster scene at an earlier departure time. It should be understood that the visual navigation indication presented by the augmented reality component 30 can also be generated based on a map of the real scene or by other means (e.g., drone-based high-altitude navigation).
Fig. 9 is a schematic diagram of a trajectory navigation mechanism of the wearable protector shown in fig. 1. Referring to fig. 9, in this embodiment, the wireless communication component 71 may be further configured to obtain a visual navigation indication 89 through remote transmission, the visual navigation indication 89 may be determined according to a movement track maintained by the remote server and spatial position information of the wearable brace (i.e., the target object 90 wearing the brace), and accordingly, the augmented reality component 30 may be further configured to present a visual navigation indication 89, such as a navigation guidance identifier representing a current direction of travel, or a current navigation track, etc.
The visual navigation indication 89 can be provided by the remote server 900, and by monitoring the spatial position information of the wearable protector (i.e. the target object 90 wearing the protector) in real time, the visual navigation indication 89 suitable for each key inflection point can be provided when the wearable protector (i.e. the target object 90 wearing the protector) is at the key inflection point in the navigation track. In fig. 9, a visual navigation indication 89 is used to guide the target object 90 and the searched person (as an example of the scene target 40) to escape along the original path of the moving track shown in fig. 8. In actual practice, however, the visual navigational directions 89 may be used to direct any path of travel.
Regarding how the visual navigation indication 89 is presented: the visual navigation indication 89 may be displayed non-simultaneously with the target infrared image 80, for example by interrupting the presentation of the target infrared image 80 on the augmented reality component 30 and being presented alone for a preset time whose value may be reasonably set in advance; or the visual navigation indication 89 may be presented on the augmented reality component 30 superimposed on the target infrared image 80, that is, the augmented reality component 30 presents the visual navigation indication 89 and the target infrared image 80 at the same time.
Therefore, further introducing the trajectory navigation mechanism facilitates overall coordination of the rescue work and improves rescue efficiency. The wearable protective gear may also be provided with a voice communication component; voice interaction with the remote server 900 through the wireless communication component 71 helps command the target object to carry out rescue accurately and to notify the target object of emergency evacuation in time when danger may occur. For example, through the real-time spatial position information and the movement track determined from it, the commander can grasp the current position and movement trend of the target object in real time and, based on auxiliary information additionally acquired through other channels outside the disaster environment, can guide the target object in the disaster environment by voice to carry out rescue work accurately rather than commanding blindly.
In addition, the wearable protective gear in this embodiment may also implement a respiratory protection mechanism, so as to provide an alert when the ambient gas is harmful to, or even prevents, breathing by the target object.
Fig. 10 is a schematic diagram of the respiratory protection mechanism of the wearable suit shown in fig. 1. Referring to fig. 10, the wearable shield in this embodiment may further include a gas detection component 75, the gas detection component 75 may be configured to detect a gas composition parameter in an environment where the wearable shield is located, and the processing component 500 may be further configured to generate the breathing alert information 87 according to the gas composition parameter detected by the gas detection component 75, and transmit the generated breathing alert information 87 to the augmented reality component 30; the augmented reality component 30 is further configured to present a breathing alert 87, where the breathing alert 87 includes at least information content indicating that hazardous gas exists in the environment of the wearable shield, for example, carbon monoxide content reaches a certain value, oxygen content is too low, flammable and explosive components are contained in the gas, and the breathing alert 87 may further include an alert symbol such as an exclamation point.
For example, the processing component 500 may monitor the gas composition parameters detected by the gas detection component 75 and generate the breathing warning information 87 in response to detecting a gas composition parameter reaching a preset hazardous level (e.g., an excessive concentration of toxic gas or carbon dioxide); the breathing warning information 87 may interrupt (e.g., for a preset duration) the presentation of the target infrared image 80 on the augmented reality component 30, or may be presented on the augmented reality component 30 superimposed on the target infrared image 80.
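As an illustration of how the breathing warning information 87 might be derived from the gas composition parameters, the sketch below checks a few hypothetical limits (carbon monoxide, oxygen, lower explosive limit); the parameter names and numeric thresholds are assumptions, not values specified by the embodiment.

```python
from typing import Optional

# Hypothetical alert thresholds; the text only gives qualitative examples
# (carbon monoxide too high, oxygen too low, flammable/explosive components present).
GAS_LIMITS = {"co_ppm_max": 50.0, "o2_pct_min": 19.5, "lel_pct_max": 10.0}

def make_breathing_alert(gas: dict) -> Optional[dict]:
    """Return breathing warning information 87 if any gas parameter is out of range."""
    reasons = []
    if gas.get("co_ppm", 0.0) > GAS_LIMITS["co_ppm_max"]:
        reasons.append("carbon monoxide content too high")
    if gas.get("o2_pct", 20.9) < GAS_LIMITS["o2_pct_min"]:
        reasons.append("oxygen content too low")
    if gas.get("lel_pct", 0.0) > GAS_LIMITS["lel_pct_max"]:
        reasons.append("flammable or explosive gas present")
    if not reasons:
        return None
    return {"type": "breathing_alert", "symbol": "!", "reasons": reasons}
```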
The processing component 500 can also remotely transmit the gas composition parameters that reach hazardous levels to the remote server through the wireless communication component 71, so that the remote server can make the next decision and issue instructions for the rescue work, preventing a larger disaster caused by wrong instructions issued without knowledge of the on-site environment.
Fig. 11 is a perspective view of an example configuration of the wearable brace shown in fig. 1. Fig. 12 is a schematic view of an exploded state of the example structure shown in fig. 11. To more intuitively understand the physical form of the wearable gear, an example structure is provided in fig. 11 and 12, it being understood that the physical form of the wearable gear is not limited to this example structure.
Referring to fig. 11 in conjunction with fig. 12, the example wearable protective gear structure can include a mask 10 provided with a respiratory protection component 20. Preferably, the respiratory protection component 20 may include a breathing valve with a gas-path switch that selectively connects the breathing valve either to the atmosphere in the environment where the wearable protective gear is located or to a gas storage cylinder; for example, the gas-path switch may connect the breathing valve to the ambient atmosphere in its closed state and to the gas storage cylinder in its open state, or the reverse. In this case, the breathing warning information 87 shown in fig. 10 may further include a visual prompt for connecting the breathing valve to the gas storage cylinder, so as to prompt the target object to switch the breathing valve manually. The gas storage cylinder can be carried by the target object, and the selective connection of the gas-path switch can be triggered automatically or manually.
Alternatively, the switching of the breathing valve may also be triggered automatically by the processing component 500; that is, the processing component 500 may be further configured to generate a breathing switching instruction according to the gas composition parameters detected by the gas detection component 75 and send the breathing switching instruction to the breathing valve, and the breathing valve is further configured to determine its connection state according to the breathing switching instruction, where the connection state of the breathing valve includes being connected to the atmosphere in the environment where the wearable protective gear is located or being connected to the gas storage cylinder. In this way, when the composition of the ambient gas threatens the health or even the life of the target object, the health and safety of the target object can be safeguarded by switching what the breathing valve is connected to.
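A corresponding sketch of the automatic breathing switching instruction is given below; it reuses the same hypothetical gas limits as the alert sketch above and is not the embodiment's prescribed switching logic.

```python
# Hypothetical criterion for automatic valve switching: connect the breathing
# valve to the gas storage cylinder whenever the ambient air fails the same
# limits used above for the breathing alert; otherwise keep breathing ambient air.
def valve_switch_instruction(gas: dict) -> dict:
    unsafe = (gas.get("co_ppm", 0.0) > 50.0
              or gas.get("o2_pct", 20.9) < 19.5
              or gas.get("lel_pct", 0.0) > 10.0)
    return {"type": "breathing_switch",
            "connect_to": "cylinder" if unsafe else "ambient"}
```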
The processing component 500 is further configured to generate a gas remaining amount prompt according to the remaining amount of gas (for example, the stored oxygen amount) in the gas storage cylinder and send it to the augmented reality component 30; the augmented reality component is further configured to present the gas remaining amount prompt. For example, the processing component 500 may determine the current remaining amount of gas in the cylinder from the cylinder's current internal pressure acquired by a gas pressure monitoring component and generate the gas remaining amount prompt; if the current remaining amount is below a preset storage threshold, the processing component 500 may further generate a warning prompt, which may be presented on the augmented reality component 30 or fed back to the target object as sound, so that the target object exits the disaster scene in time.
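For illustration, the remaining-gas estimate could be derived from the cylinder pressure roughly as sketched below; the 300 bar fill pressure, 10 bar residual pressure, and 25% warning threshold are invented SCBA-style figures, not values given by the embodiment.

```python
def gas_remaining_fraction(pressure_bar: float,
                           full_bar: float = 300.0,
                           residual_bar: float = 10.0) -> float:
    """Estimate the usable gas fraction from the cylinder's internal pressure.

    Treating the stored gas as ideal at roughly constant temperature, the
    remaining amount scales approximately with pressure above a residual floor.
    """
    usable = max(pressure_bar - residual_bar, 0.0)
    return min(usable / (full_bar - residual_bar), 1.0)

LOW_GAS_THRESHOLD = 0.25  # hypothetical "preset storage threshold"

def gas_prompt(pressure_bar: float) -> dict:
    frac = gas_remaining_fraction(pressure_bar)
    return {"remaining_pct": round(100 * frac, 1),
            "warning": frac < LOW_GAS_THRESHOLD}
```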
The mask 10 has a light-transmitting region at least above the respiratory protection component 20, and the augmented reality component 30 may be mounted inside the mask 10 at a position covering the light-transmitting region. The example wearable protective gear structure may include a first hanging box 50 mounted on the outside of a first side of the mask 10, with the infrared detection component 51, the distance detection component 53, the brightness detection component 55, and the mode switch 57 mounted on the front of the first hanging box 50. The example structure may also include a second hanging box 70 mounted in a suspended manner on the outside of a second side of the mask 10 opposite the first side, with the wireless communication component 71 mounted on top of the second hanging box 70 and the position sensing component 73 and the gas detection component 75 housed inside it; the second hanging box 70 has a ventilation gap 750 on its front side, the ventilation gap 750 being disposed adjacent to the respiratory protection component 20 and communicating with the sensing end of the gas detection component 75.
The processing component 500 may be housed inside the first hanging box 50 and electrically connected to the infrared detection component 51, the distance detection component 53, the brightness detection component 55, and the mode switch 57, in which case it may also communicate across the two boxes with the wireless communication component 71, the position sensing component 73, and the gas detection component 75 mounted in the second hanging box 70. Alternatively, the processing component 500 may be housed inside the second hanging box 70 and electrically connected to the wireless communication component 71, the position sensing component 73, and the gas detection component 75 mounted there, in which case it may communicate across the two boxes with the infrared detection component 51, the distance detection component 53, the brightness detection component 55, and the mode switch 57 mounted in the first hanging box 50.
As can be seen from fig. 12, the augmented reality component 30, the infrared detection component 51, the distance detection component 53, the brightness detection component 55, the mode switch 57, the wireless communication component 71, the position sensing component 73, and the gas detection component 75 may be provided separately from the mask 10 carrying the respiratory protection component 20; therefore, the wearable protective gear in this embodiment may also take the form of an accessory that does not include the mask 10 but can be detachably mounted on the mask 10.
Fig. 13 is an exemplary flow diagram of a scene presentation method for a wearable brace in another embodiment. Referring to fig. 13, an exemplary flow of the scene presenting method may be adapted to be executed by a processing component of the wearable supporter in the foregoing embodiment, and reference may be made to the explanation in the foregoing embodiment for what is not explained in detail in the following embodiment. The scene presenting method may include:
S1310: obtain an infrared image of a real scene output by an infrared detection assembly, where the visual field corresponding to the imaging field angle of the infrared detection assembly overlaps, in the same direction, with the visual field corresponding to the optical field angle of an augmented reality assembly, the augmented reality assembly is used for a target object to observe the real scene, and the optical field angle is the field angle at which the target object observes the real scene through the augmented reality assembly.
S1330: and processing the infrared image based on the conversion between the imaging field angle of the infrared detection assembly and the optical field angle of the augmented reality assembly to obtain a target infrared image matched with the optical field angle of the augmented reality assembly.
In an alternative embodiment, this step may specifically include: and processing the infrared image based on a conversion relation between the imaging field angle of the infrared detection assembly and the optical field angle of the augmented reality assembly to obtain a target infrared image matched with the optical field angle of the augmented reality assembly, wherein the conversion relation is at least used for defining the infrared image cutting size corresponding to the difference between the imaging field angle of the infrared detection assembly and the optical field angle of the augmented reality assembly.
In another alternative embodiment, the step may specifically include: determining a calibration reference object in a real scene; calibrating a predetermined calibration conversion relation between an imaging angle of view of the infrared detection assembly and an optical angle of view of the augmented reality assembly based on a difference value between the determined distance of the calibration reference object relative to the wearable protective equipment and a preset calibration distance of the wearable protective equipment to obtain a calibration conversion relation; and processing the infrared image based on the calibration conversion relation to obtain a target infrared image matched with the optical field angle of the augmented reality component.
In either way, the conversion between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component can be realized by cropping the infrared image. Specifically, the edge portions of the infrared image may be cropped so that the target infrared image has an overall image size that matches the optical field angle of the augmented reality component, and the scene objects in the target infrared image are not improperly distorted or displaced due to image compression.
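As a hedged sketch of this field-angle conversion by edge cropping, the function below retains a central window whose size scales with the ratio of the tangents of the half field angles; the pinhole-style model and the example angles in the comment are assumptions, and the actual calibration conversion relation may additionally fold in the distance-based correction described earlier.

```python
import math
import numpy as np

def crop_to_optical_fov(ir_image: np.ndarray,
                        fov_ir_deg: tuple, fov_ar_deg: tuple) -> np.ndarray:
    """Crop the IR image edges so it matches the AR component's optical field angle.

    Assumes the imaging field angle is at least as wide as the optical field
    angle (so only cropping, never scaling, is needed) and that retained
    width/height scale with tan(FOV / 2).
    """
    h, w = ir_image.shape[:2]
    keep_w = int(round(w * math.tan(math.radians(fov_ar_deg[0] / 2))
                         / math.tan(math.radians(fov_ir_deg[0] / 2))))
    keep_h = int(round(h * math.tan(math.radians(fov_ar_deg[1] / 2))
                         / math.tan(math.radians(fov_ir_deg[1] / 2))))
    x0, y0 = (w - keep_w) // 2, (h - keep_h) // 2
    return ir_image[y0:y0 + keep_h, x0:x0 + keep_w]

# e.g. a 56x42 degree imaging FOV cropped down to a 40x30 degree optical FOV (made-up values):
# target_ir = crop_to_optical_fov(ir_frame, (56, 42), (40, 30))
```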
S1350: and sending the target infrared image to the augmented reality assembly so that the augmented reality assembly presents the target infrared image, and enabling the scene target in the target infrared image to correspond to the pose of the scene target in the real scene.
Based on the above flow, if the wearable protector has an augmented reality component for allowing the target object to observe the real scene and an infrared detection component for outputting an infrared image of the real scene, the infrared image output by the infrared detection component by sensing the temperature of the scene target in the real scene can be converted into a target infrared image matched with the optical field angle of the augmented reality component, so that the augmented reality component can enhance the correct reproduction of the real scene in the field of view of the target object by presenting the converted target infrared image, thereby enhancing the visual recognition degree of the real scene (for example, disaster scenes such as fire and the like), and further helping to alleviate or even eliminate the trouble that the real scene is difficult to be clearly known.
During execution of the flow shown in fig. 13, in order to ensure that the target infrared image can be presented as clearly as possible under different degrees of environmental temperature difference, the image type of the target infrared image may be switchably determined as a thermal image or a grayscale image according to the environmental temperature difference. Specifically, the scene presenting method in this embodiment may further include: generating a switching instruction for the image presentation type of the target infrared image and sending the switching instruction to the augmented reality assembly, where the augmented reality assembly is further configured to present the target infrared image according to the image presentation type corresponding to the switching instruction, and the image presentation type includes a thermal image type or a grayscale image type.
The manner of generating the switching instruction of the image presentation type of the target infrared image may specifically include:
generating a switching instruction according to the temperature information of the real scene, for example, determining a temperature difference between scene targets in the real scene according to the temperature information of the scene targets in the real scene, and generating a switching instruction according to the temperature difference between the scene targets, and the temperature information of the real scene may be acquired from the infrared image;
or generating a switching instruction according to the switching request of the image presentation type of the target infrared image; the switching request is generated by the mode switch in response to an external touch operation.
During execution of the flow shown in fig. 13, in order to prevent the presented target infrared image from being hard to distinguish because the environment is too bright, or from visually stimulating the target object because the environment is too dark, the backlight brightness of the augmented reality assembly may be determined according to the detected ambient brightness of the real scene (in particular, within the field of view corresponding to the optical field angle of the augmented reality assembly). Specifically, the scene presenting method in this embodiment may further include: obtaining the ambient brightness in the real scene (in particular, within the field of view corresponding to the optical field angle of the augmented reality assembly), generating a backlight brightness adjustment instruction according to the obtained ambient brightness, and sending the backlight brightness adjustment instruction to the augmented reality assembly, where the augmented reality assembly is further configured to adjust the backlight brightness according to the backlight brightness adjustment instruction so as to present the target infrared image based on the adjusted backlight brightness. For example, the ambient brightness in the real scene may be obtained from a brightness detection component further included in the wearable protective gear.
In order to highlight the scene target in the target infrared image, the scene presenting method in this embodiment may further include: and performing enhancement processing on the outline of the scene target appearing in the target infrared image, so that the outline of the scene target is enhanced and displayed in the target infrared image presented by the augmented reality component.
If the distance between the scene object and the wearable supporter can be detected in real time during the operation process shown in fig. 13, the scene presenting method in this embodiment may further include: the method comprises the steps of obtaining the distance between a scene target detected in a real scene and a wearable protective tool, carrying out association processing on the detected distance and the scene target, and sending an association processing result to an augmented reality assembly, wherein the augmented reality assembly can be further used for carrying out association presentation on the distance detected in the real scene and the scene target based on the association processing result. For example, the distance of the scene object relative to the wearable gear can be obtained from a distance detection component further included with the wearable gear.
For example, a distance information presentation instruction or distance data representing the distance is generated based on the acquired distance, and the distance information presentation instruction or the distance data is sent to the augmented reality component as a correlation processing result, so that the augmented reality component further performs correlation presentation on the scene target and the distance based on the distance information presentation instruction or the distance data; or, the acquired distance is added to the target infrared image, and the target infrared image to which the distance is added is sent to the augmented reality component as a result of the association processing when S1350 is executed.
Optionally, the scene presenting method in this embodiment may also implement real-time reporting of the position of the wearable protector. Specifically, the scene presenting method in this embodiment may further include: the method comprises the steps of acquiring spatial position information of the wearable protective tool, and remotely transmitting the acquired spatial position information (for example, remotely transmitting the acquired spatial position information through a wireless communication component further included in the wearable protective tool) so that a remote server can maintain a movement track of the wearable protective tool by using the spatial position information, wherein the remote server is also used for presenting a visual navigation instruction generated based on the movement track in a rear wearable protective tool, and the starting movement time of the rear wearable protective tool is later than that of the wearable protective tool.
Optionally, the scene presenting method in this embodiment may further implement a visual navigation indication. Specifically, the scene presenting method in this embodiment may further include: presenting a visual navigation indication directing the direction of travel. The visual navigation instruction can be generated based on a movement track maintained by a remote server side, or can be generated based on a map of a disaster scene or other ways; the visual navigation instruction may be obtained in real time by monitoring the remote transmission, and the visual navigation instruction may interrupt (e.g., interrupt for a preset time) the presentation of the target infrared image on the augmented reality component, or may be presented in superposition with the target infrared image on the augmented reality component.
In addition, during the execution of the flow shown in fig. 13, the method for presenting a scene in this embodiment may further include: the method comprises the steps of generating a breathing switching instruction according to gas composition parameters in the environment where the wearable protective equipment is located, and sending the breathing switching instruction to a breathing valve, wherein the breathing valve is used for determining a communication state according to the breathing switching instruction, and the communication state comprises a state communicated with the atmosphere in the environment where the wearable protective equipment is located or a state communicated with a gas storage bottle. If the wearable shield further comprises a gas detection component for detecting a gas composition parameter in an environment in which the wearable shield is located, the gas composition parameter can be obtained from the gas detection component. On this basis, the scene rendering method in this embodiment may further include: generating breathing warning information according to the gas composition parameters, and sending the breathing warning information to an augmented reality assembly, wherein the augmented reality assembly is used for presenting the breathing warning information, and the breathing warning information comprises information content used for representing that dangerous gas exists in the environment where the wearable protective equipment is located; alternatively, the scene presenting method in this embodiment may further include: and generating gas surplus prompt information according to the gas surplus in the gas storage bottle, and sending the gas surplus prompt information to the augmented reality component, wherein the augmented reality component is used for presenting the gas surplus prompt information.
The technical solution provided by the method embodiment belongs to the same inventive concept as the technical solution provided by the product embodiment, and the contents not explained in detail in the method embodiment can refer to the explanations in the product embodiment, and can achieve the corresponding beneficial effects described in the product embodiment.
In another embodiment, there is also provided a non-transitory computer-readable storage medium storing instructions that, when executed, cause a processor (or processing component) to perform any of the scene presenting methods for a wearable protective gear provided in the above embodiments; for details of the methods, reference can be made to the description in the above embodiments. The non-transitory computer-readable storage medium may include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (12)

1. A wearable brace, comprising:
an augmented reality component for a target object to observe a real scene;
the infrared detection component is used for outputting an infrared image of the real scene, and a visual field corresponding to an imaging visual field angle of the infrared detection component and a visual field corresponding to an optical visual field angle of the augmented reality component are overlapped in the same direction, wherein the optical visual field angle is the visual field angle of the target object for observing the real scene through the augmented reality component;
a processing component to:
calibrating a predetermined calibration conversion relationship between the imaging field of view of the infrared detection assembly and the optical field of view of the augmented reality assembly based on a difference between a distance of a calibration reference object relative to a wearable protective equipment determined in the real scene and a preset calibration distance, wherein the calibration conversion relationship is at least used for defining an infrared image cropping size corresponding to a difference between the imaging field of view of the infrared detection assembly and the optical field of view of the augmented reality assembly;
processing the infrared image based on a calibration conversion relation obtained by calibrating the calibration conversion relation to obtain a target infrared image matched with the optical field angle of the augmented reality assembly, and sending the target infrared image to the augmented reality assembly;
the augmented reality component is further configured to present the target infrared image and enable a scene target existing in the target infrared image to correspond to a pose of the scene target in the real scene.
2. The wearable brace of claim 1,
the processing component is further to:
performing enhancement processing on the outline of the scene target appearing in the target infrared image so that the outline of the scene target is displayed in an enhanced manner in the target infrared image presented by the augmented reality component;
or,
the wearable protective gear further comprises a brightness detection component, wherein the brightness detection component is used for detecting the ambient brightness in the real scene;
the processing component is further to: generating a backlight brightness adjusting instruction according to the environment brightness, and sending the backlight brightness adjusting instruction to the augmented reality component;
the augmented reality component is further to: adjusting backlight brightness according to the backlight brightness adjusting instruction to present the target infrared image based on the adjusted backlight brightness;
or,
the wearable protective gear further comprises a position sensing component and a wireless communication component;
wherein, the position sensing component is used for sensing the space position information of the wearable protective gear;
the wireless communication component is configured to remotely transmit the spatial position information, so that a remote server maintains a movement trajectory of the wearable pad using the spatial position information, and the remote server is further configured to present a visual navigation indication generated based on the movement trajectory in a rear wearable pad, wherein a start movement time of the rear wearable pad is later than a current start movement time of the wearable pad;
or,
the wearable gear further comprises a distance detection component for detecting a distance of a scene target in the real scene relative to the wearable gear;
the processing component is further to: the distance detected by the distance detection component is associated with the scene target, and an associated processing result is sent to the augmented reality component;
the augmented reality component is further to: and performing association presentation on the distance and the scene target based on the association processing result.
3. The wearable brace according to claim 1, wherein,
the processing component is further to: generating a switching instruction of the image presentation type of the target infrared image, and sending the switching instruction to the augmented reality component;
the augmented reality component is further to: and presenting the target infrared image according to an image presentation type corresponding to the switching instruction, wherein the image presentation type comprises a thermal image type or a gray image type.
4. The wearable brace according to claim 3,
the processing component is further to: generating the switching instruction according to the temperature information of the real scene detected by the infrared detection assembly;
or
The wearable protective equipment further comprises a mode switch, and the mode switch is used for responding to external touch operation, generating a switching request of the image presentation type of the target infrared image and sending the switching request to the processing component;
the processing component is further to: and generating the switching instruction according to the switching request.
5. The wearable protective tool of claim 1, wherein
the wearable protective tool further comprises a gas detection component configured to detect gas composition parameters of the environment in which the wearable protective tool is located;
the processing component is further configured to: generate a breathing switching instruction according to the gas composition parameters and send the breathing switching instruction to a breathing valve;
the breathing valve is configured to: determine a communication state according to the breathing switching instruction, wherein the communication state comprises a state of communicating with the atmosphere of the environment in which the wearable protective tool is located or a state of communicating with a gas storage bottle (see the sketch following this claim).
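A minimal sketch of the breathing-valve decision in claim 5, under the assumption that the gas detection component reports oxygen concentration and carbon monoxide level; the field names and the 19.5 % / 35 ppm thresholds are hypothetical values chosen for illustration, not figures from the patent.

```python
from dataclasses import dataclass
from enum import Enum

class ValveState(Enum):
    AMBIENT_AIR = "communicate with the atmosphere of the environment"
    GAS_BOTTLE = "communicate with the gas storage bottle"

@dataclass
class GasComposition:
    o2_percent: float  # hypothetical field reported by the gas detection component
    co_ppm: float      # hypothetical field

def breathing_switch_instruction(gas: GasComposition) -> ValveState:
    """Illustrative rule: low oxygen or high CO switches breathing to the gas bottle."""
    if gas.o2_percent < 19.5 or gas.co_ppm > 35.0:
        return ValveState.GAS_BOTTLE
    return ValveState.AMBIENT_AIR
```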
6. The wearable protective tool of claim 5, wherein
the processing component is further configured to: generate breathing warning information according to the gas composition parameters and send the breathing warning information to the augmented reality component;
the augmented reality component is further configured to: present the breathing warning information, wherein the breathing warning information comprises content indicating that a dangerous gas is present in the environment in which the wearable protective tool is located;
or,
the processing component is further configured to: generate remaining-gas prompt information according to the amount of gas remaining in the gas storage bottle and send the remaining-gas prompt information to the augmented reality component;
the augmented reality component is further configured to: present the remaining-gas prompt information (see the sketch following this claim).
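The two alternatives of claim 6 reduce to simple message generation. The sketch below is self-contained and assumes scalar gas readings and a pressure-based estimate of remaining breathing time; the message texts, thresholds, and the pressure-to-minutes conversion are illustrative assumptions.

```python
from typing import Optional

def breathing_warning(o2_percent: float, co_ppm: float) -> Optional[str]:
    """Return warning text for the augmented reality component, or None if the air is safe."""
    if co_ppm > 35.0:
        return "WARNING: dangerous gas (CO) detected in the environment"
    if o2_percent < 19.5:
        return "WARNING: oxygen-deficient atmosphere"
    return None

def remaining_gas_prompt(bottle_pressure_bar: float,
                         full_pressure_bar: float = 300.0,
                         full_duration_min: float = 45.0) -> str:
    """Estimate remaining breathing time proportionally to bottle pressure (illustrative)."""
    minutes_left = full_duration_min * bottle_pressure_bar / full_pressure_bar
    return f"Air remaining: approx. {minutes_left:.0f} min ({bottle_pressure_bar:.0f} bar)"
```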
7. A scene presenting method for a wearable protective tool, comprising:
acquiring an infrared image of a real scene output by an infrared detection component, wherein the field of view corresponding to the imaging field angle of the infrared detection component and the field of view corresponding to the optical field angle of an augmented reality component overlap in the same direction, the augmented reality component is used by a target object to observe the real scene, and the optical field angle is the field angle at which the target object observes the real scene through the augmented reality component;
calibrating a predetermined calibration conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component based on the difference between the distance of a calibration reference object relative to the wearable protective tool, determined in the real scene, and a preset calibration distance, wherein the calibration conversion relationship is at least used to define an infrared image cropping size corresponding to the difference between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component;
processing the infrared image based on the calibrated conversion relationship obtained from the calibrating step to obtain a target infrared image matched with the optical field angle of the augmented reality component (see the sketch following this claim);
and sending the target infrared image to the augmented reality component so that the augmented reality component presents the target infrared image, wherein the augmented reality component is further used to make a scene target present in the target infrared image correspond to the pose of that scene target in the real scene.
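A hedged sketch of the field-angle matching step in claim 7: the infrared frame is cropped symmetrically so that the retained region spans the (typically smaller) optical field angle of the augmented reality component, then resized to the display resolution. The pinhole-style tangent relationship and the numbers in the usage comment are assumptions for illustration; the patent's calibration conversion relationship, including its distance-based correction, is not reproduced here.

```python
import math
import cv2
import numpy as np

def crop_to_optical_fov(ir_frame: np.ndarray,
                        imaging_fov_deg: float,
                        optical_fov_deg: float,
                        display_size: tuple) -> np.ndarray:
    """Crop an infrared frame so its angular extent matches the optical field angle.

    Assumes a pinhole-like projection and that both field angles are full angles
    measured along the same axis; this is a simplification of the claimed
    calibration conversion relationship.
    """
    h, w = ir_frame.shape[:2]
    # Fraction of the frame to keep: tan(optical/2) / tan(imaging/2).
    keep = math.tan(math.radians(optical_fov_deg) / 2) / math.tan(math.radians(imaging_fov_deg) / 2)
    keep = min(keep, 1.0)  # never enlarge if the optical field angle is the larger one
    new_w, new_h = int(round(w * keep)), int(round(h * keep))
    x0, y0 = (w - new_w) // 2, (h - new_h) // 2
    cropped = ir_frame[y0:y0 + new_h, x0:x0 + new_w]
    # Resize to the display resolution of the augmented reality component (width, height).
    return cv2.resize(cropped, display_size, interpolation=cv2.INTER_LINEAR)

# Example with illustrative numbers: a 56-degree thermal imager matched to a 40-degree display.
# target = crop_to_optical_fov(ir_frame, imaging_fov_deg=56.0, optical_fov_deg=40.0, display_size=(640, 400))
```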
8. The scene presenting method of claim 7,
further comprising: performing enhancement processing on the outline of a scene target appearing in the target infrared image, so that the outline of the scene target is displayed in an enhanced manner in the target infrared image presented by the augmented reality component;
or,
further comprising: generating a backlight brightness adjustment instruction according to the ambient brightness in the real scene and sending the backlight brightness adjustment instruction to the augmented reality component, the augmented reality component being configured to adjust its backlight brightness according to the backlight brightness adjustment instruction and to present the target infrared image at the adjusted backlight brightness;
or,
further comprising: remotely transmitting spatial position information of the wearable protective tool, so that a remote server maintains a movement trajectory of the wearable protective tool using the spatial position information, the remote server being further configured to send a visual navigation indication generated based on the movement trajectory to a following wearable protective tool, wherein the movement start time of the following wearable protective tool is later than that of the current wearable protective tool (see the sketch following this claim);
or,
further comprising: associating the distance of a scene target in the real scene relative to the wearable protective tool with the scene target and sending the association result to the augmented reality component, the augmented reality component being configured to present the distance in association with the scene target based on the association result.
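A minimal sketch of the trajectory-maintenance alternative referenced above: the remote server accumulates time-stamped positions reported by a leading wearable protective tool and thins them into waypoints that can be presented as a visual navigation indication on a following one. The data structures, the 2 m breadcrumb spacing, and the in-memory store are illustrative assumptions; the patent does not specify the server implementation.

```python
import math
import time
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TrackPoint:
    t: float  # timestamp in seconds
    x: float  # planar coordinates in meters (e.g. from an indoor positioning system)
    y: float

@dataclass
class TrajectoryStore:
    """In-memory movement trajectories keyed by device id (illustrative only)."""
    tracks: Dict[str, List[TrackPoint]] = field(default_factory=dict)

    def report_position(self, device_id: str, x: float, y: float) -> None:
        """Called whenever a wearable protective tool transmits its spatial position."""
        self.tracks.setdefault(device_id, []).append(TrackPoint(time.time(), x, y))

    def navigation_waypoints(self, leader_id: str, spacing_m: float = 2.0) -> List[Tuple[float, float]]:
        """Thin the leader's trail into waypoints roughly spacing_m apart."""
        waypoints: List[Tuple[float, float]] = []
        for p in self.tracks.get(leader_id, []):
            if not waypoints or math.dist(waypoints[-1], (p.x, p.y)) >= spacing_m:
                waypoints.append((p.x, p.y))
        return waypoints
```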
9. The scene presenting method of claim 7, further comprising:
generating a switching instruction for the image presentation type of the target infrared image and sending the switching instruction to the augmented reality component, the augmented reality component being configured to present the target infrared image in the image presentation type corresponding to the switching instruction, wherein the image presentation type comprises a thermal image type or a grayscale image type.
10. The scene presenting method of claim 9, wherein generating the switching instruction for the image presentation type of the target infrared image comprises:
generating the switching instruction according to the temperature information of the real scene; or,
generating the switching instruction according to a switching request for the image presentation type of the target infrared image, the switching request being generated by a mode switch in response to an external touch operation.
11. The scene presenting method of claim 7, further comprising:
generating a breathing switching instruction according to gas composition parameters of the environment in which the wearable protective tool is located and sending the breathing switching instruction to a breathing valve, the breathing valve being configured to determine a communication state according to the breathing switching instruction, wherein the communication state comprises a state of communicating with the atmosphere of the environment in which the wearable protective tool is located or a state of communicating with a gas storage bottle.
12. The scene presenting method of claim 11,
further comprising: generating breathing warning information according to the gas composition parameters and sending the breathing warning information to the augmented reality component, the augmented reality component being configured to present the breathing warning information, wherein the breathing warning information comprises content indicating that a dangerous gas is present in the environment in which the wearable protective tool is located;
or,
further comprising: generating remaining-gas prompt information according to the amount of gas remaining in the gas storage bottle and sending the remaining-gas prompt information to the augmented reality component, the augmented reality component being configured to present the remaining-gas prompt information.
CN202110712665.5A 2021-06-25 2021-06-25 Wearable protective tool and scene presenting method for wearable protective tool Active CN113409218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110712665.5A CN113409218B (en) 2021-06-25 2021-06-25 Wearable protective tool and scene presenting method for wearable protective tool


Publications (2)

Publication Number Publication Date
CN113409218A CN113409218A (en) 2021-09-17
CN113409218B true CN113409218B (en) 2023-02-28

Family

ID=77679561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110712665.5A Active CN113409218B (en) 2021-06-25 2021-06-25 Wearable protective tool and scene presenting method for wearable protective tool

Country Status (1)

Country Link
CN (1) CN113409218B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082646B (en) * 2022-06-30 2024-06-04 华中科技大学 VR (virtual reality) glasses lens pose correction method based on symmetrical point allowance deviation


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6925403B2 (en) * 2003-09-15 2005-08-02 Eaton Corporation Method and system for calibrating a sensor
US10969584B2 (en) * 2017-08-04 2021-04-06 Mentor Acquisition One, Llc Image expansion optic for head-worn computer

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104364800A (en) * 2012-03-30 2015-02-18 前视红外系统股份公司 Facilitating analysis and interpretation of associated visible light and infrared (IR) image information
CN111539996A (en) * 2020-03-25 2020-08-14 深圳奇迹智慧网络有限公司 Thermal imaging processing method, thermal imaging processing device, computer equipment and storage medium
CN111475130A (en) * 2020-03-27 2020-07-31 深圳光启超材料技术有限公司 Display method of track information, head-mounted device, storage medium and electronic device
CN211603731U (en) * 2020-04-03 2020-09-29 江苏集萃有机光电技术研究所有限公司 Virtual reality or augmented reality display system and wearable equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design of a Pupil-Matched Occlusion-Capable Optical See-Through Wearable Display; Austin Wilson et al.; IEEE Transactions on Visualization and Computer Graphics (Early Access); 2021-04-27; pp. 1-15 *
Research on correction of infrared temperature measurement error when the field of view exceeds the target (in Chinese); Wang Chaoqun et al.; Laser & Infrared; 2015-10-31; Vol. 45, No. 10; pp. 1211-1215 *

Also Published As

Publication number Publication date
CN113409218A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
US11765331B2 (en) Immersive display and method of operating immersive display for real-world object alert
CA2884855C (en) Face mounted extreme environment thermal sensor system
US10365490B2 (en) Head-mounted display, head-up display and picture displaying method
US20200020161A1 (en) Virtual Barrier Objects
JP2015231828A (en) Display device and movable body
JPH0854282A (en) Head-mounted display device
JP2001269417A (en) Emergent flight safety device
JP2008502992A (en) Communication method that gives image information
US7038639B1 (en) Display system for full face masks
JP2008504597A (en) Apparatus and method for displaying peripheral image
JPH07167668A (en) Equipment for offering information on running
CN110708533A (en) Visual assistance method based on augmented reality and intelligent wearable device
EP1046411B1 (en) Viewing system
CN113409218B (en) Wearable protective tool and scene presenting method for wearable protective tool
JP2009183473A (en) Visual line direction detection device, and visual line direction detection method
CN110895676A (en) Dynamic object tracking
US12019441B2 (en) Display control system, display control device and display control method
US6837240B1 (en) Display system upgrade for a full face mask
US11694345B2 (en) Moving object tracking using object and scene trackers
US20020053101A1 (en) Helmet
WO2016135448A1 (en) Emergency guidance system and method
JPH04370207A (en) Helmet equipped with display
WO2021172333A1 (en) Vehicle display device
CN219515404U (en) AR intelligent fire-fighting helmet
KR102516562B1 (en) Method for generating infrared thermal image including depth information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant