CN115100556A - Augmented reality method and device based on image segmentation and fusion and electronic equipment - Google Patents
Augmented reality method and device based on image segmentation and fusion and electronic equipment
- Publication number
- CN115100556A (application number CN202211023758.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- target image
- target
- night vision
- thermal imaging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/30—Transforming light or analogous information into electric information
- H04N5/33—Transforming infrared radiation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The application relates to an augmented reality method and apparatus based on image segmentation and fusion, and to an electronic device. A low-light night vision camera and an infrared camera are arranged on AR glasses, and their optical axes are calibrated physically to ensure that the camera ends of the low-light night vision camera and the infrared camera aim at the same position. The method includes: acquiring a low-light night vision device image output by the low-light night vision camera and an infrared thermal imaging image output by the infrared camera; detecting a target image in the infrared thermal imaging image according to a preset target detection method, and judging whether a target image is detected; if so, segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm; and, based on a preset superposition rule, superimposing the segmented target image onto the low-light night vision device image to form a display image. The method and apparatus facilitate comprehensive perception of target and environmental information.
Description
Technical Field
The present application relates to the field of image fusion, and in particular, to a method, an apparatus, and an electronic device for augmented reality based on image segmentation and fusion.
Background
A night vision device uses an image intensifier as the core component of a nighttime external sighting device. It does not illuminate the target with an infrared searchlight when in operation; instead, it observes and aims at the target by collecting the light the target reflects under weak illumination and intensifying it, through the image intensifier, into a visible image on a fluorescent screen that the human eye can perceive.
At present, common night vision devices are mainly low-light night vision devices and infrared thermal imaging night vision devices. In a low-light device, weak natural light reflected from the target's surface enters the device and is focused, under the action of a high-luminosity objective lens, onto the photocathode surface of an image intensifier (which coincides with the rear focal plane of the objective), exciting photoelectrons; under the action of the intensifier, a distant target illuminated only by weak natural light becomes a visible-light image suitable for human observation. An infrared thermal imaging night vision device, also called a thermal imager, instead forms a thermal image from the target's infrared radiation.
A low-light night vision device helps the user inspect the surrounding environment but makes targets hard to find, whereas an infrared thermal imaging night vision device helps the user find targets with infrared signatures but makes environmental detail hard to see.
Disclosure of Invention
In order to comprehensively perceive target and environmental information, the present application provides an augmented reality method and apparatus based on image segmentation and fusion, and an electronic device.
In a first aspect, the present application provides an augmented reality method based on image segmentation and fusion, which adopts the following technical scheme:
An augmented reality method based on image segmentation and fusion, applied to AR glasses on which a low-light night vision camera and an infrared camera are arranged, the optical axes being calibrated physically to ensure that the camera ends of the low-light night vision camera and the infrared camera aim at the same position, the method including:
acquiring a low-light-level night vision device image output by a low-light-level night vision camera and an infrared thermal imaging image output by an infrared camera;
detecting a target image in the infrared thermal imaging image according to a preset target detection method, and judging whether the target image is detected;
if so, segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm;
and based on a preset superposition rule, superposing the segmented target image to the low-light night vision device image to form a display image.
By adopting this technical scheme, the low-light night vision device image output by the low-light night vision camera and the infrared thermal imaging image output by the infrared camera are obtained. After the infrared thermal imaging image is obtained, a target image in it is detected in real time according to a preset target detection method, and whether a target image has been detected is judged. Once a target image is detected, it is segmented from the infrared thermal imaging image, and the segmented target image is then superimposed onto the low-light night vision device image, based on a preset superposition rule, to form a display image. This scheme combines the advantage of the low-light night vision device image (an easy-to-observe environment) with that of the infrared thermal imaging image (easy-to-find targets): the user learns both kinds of information without frequently switching video sources and can thus comprehensively perceive target and environmental information.
Optionally, the method for detecting a target image in an infrared thermal imaging image according to a preset target detection method and determining whether the target image is detected specifically includes:
acquiring a target image material, converting the target image material into a first characteristic value according to a preset conversion algorithm, and storing the first characteristic value;
detecting a target image according to YOLO or Faster R-CNN;
converting the detected target image into a second characteristic value according to a preset conversion algorithm;
and comparing the first characteristic value with the second characteristic value, and judging that the target image is detected when the similarity of the first characteristic value and the second characteristic value reaches a threshold value.
Optionally, after the method of detecting a target image in an infrared thermal imaging image according to a preset target detection method and determining whether the target image is detected, the method further includes:
if not, detecting a target image in the infrared thermal imaging image according to a preset target detection method, and judging whether the target image is detected.
Optionally, the preset image segmentation algorithm is any one of trimap-based matting, Deep Image Matting, and Background Matting.
Optionally, according to a preset determination rule, a first base point is determined in the infrared thermal imaging image, and a second base point is determined in the low-light night vision device image;
establishing a first plane rectangular coordinate system by taking a first base point as an origin based on a preset rectangular coordinate system establishing rule, and establishing a second plane rectangular coordinate system by taking a second base point as the origin;
determining the coordinates of the target image in a first plane rectangular coordinate system;
acquiring the number of pixels in the vertical direction and in the horizontal direction of the infrared thermal imaging image, and the number of pixels in the vertical direction and in the horizontal direction of the low-light night vision device image;
based on a preset adjusting mode, adjusting a target image according to the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the infrared thermal imaging image and the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the low-light night vision device image to form a target image to be fused;
determining, based on a preset calculation mode, the coordinates of the target image to be fused in the second rectangular coordinate system according to the numbers of pixels in the vertical and horizontal directions of the infrared thermal imaging image, the numbers of pixels in the vertical and horizontal directions of the low-light night vision device image, and the coordinates of the target image in the first rectangular coordinate system;
and superposing the target image to the low-light level night vision device image according to the coordinate of the target image to be fused in the second rectangular coordinate system to form a display image.
Optionally, after the method of superimposing the target image onto the low-light level night vision device image according to the coordinate of the target image to be fused in the second rectangular coordinate system to form the display image, the method further includes:
acquiring coordinates of a target image to be fused which is manually input;
and adjusting the position of the target image to be fused in the low-light night vision device image according to the manually input coordinates of the target image to be fused to form a final image.
Optionally, the preset calculation is: x1/w2 = x2/w1 and y1/h2 = y2/h1, where x1 is the abscissa of the target image in the first rectangular coordinate system and w2 is the number of pixels in the horizontal direction of the infrared thermal imaging image; x2 is the abscissa of the target image to be fused in the second rectangular coordinate system and w1 is the number of pixels in the horizontal direction of the low-light night vision device image; y1 is the ordinate of the target image in the first rectangular coordinate system and h2 is the number of pixels in the vertical direction of the infrared thermal imaging image; y2 is the ordinate of the target image to be fused in the second rectangular coordinate system and h1 is the number of pixels in the vertical direction of the low-light night vision device image.
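As a quick numeric check of this proportion, the mapping can be sketched as below. The function name, image sizes, and coordinates are illustrative, not from the patent text:

```python
# Map the target's position from the thermal image's coordinate system
# into the low-light image's coordinate system by proportion:
#   x1 / w_thermal = x2 / w_lowlight,  y1 / h_thermal = y2 / h_lowlight.
def map_coordinate(x1, y1, w_thermal, h_thermal, w_lowlight, h_lowlight):
    x2 = x1 * w_lowlight / w_thermal
    y2 = y1 * h_lowlight / h_thermal
    return x2, y2

# Example: a point at the center of a 640x512 thermal frame maps to the
# center of a 1920x1080 low-light frame.
x2, y2 = map_coordinate(320, 256, 640, 512, 1920, 1080)  # -> (960.0, 540.0)
```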
In a second aspect, the present application provides an augmented reality apparatus based on image segmentation and fusion, which adopts the following technical solution:
an apparatus for augmented reality based on image segmentation and fusion, comprising:
the acquisition module is used for acquiring a low-light-level night vision device image output by the low-light-level night vision camera and an infrared thermal imaging image output by the infrared camera;
the target detection module is used for detecting a target image in the infrared thermal imaging image according to a preset target detection method and judging whether the target image is detected or not;
the image segmentation module is used for segmenting the detected target image from the infrared thermal imaging image, according to a preset image segmentation algorithm, if a target image is detected;
and the image fusion module is used for superposing the segmented target image to the low-light level night vision device image to form a display image based on a preset superposition rule.
In a third aspect, the present application provides an electronic device, which adopts the following technical solution:
an electronic device comprising a memory and a processor, the memory storing a computer program of the augmented reality method based on image segmentation and fusion, which can be loaded and executed by the processor.
To sum up, the application comprises the following beneficial technical effects:
the method comprises the steps of obtaining a low-light-level night vision device image output by a low-light-level night vision camera and an infrared thermal imaging image output by an infrared camera, detecting a target image in the infrared thermal imaging image in real time according to a preset target detection method after obtaining the infrared thermal imaging image, judging whether the target image is detected, segmenting the target image from the infrared thermal imaging image after detecting the target image, overlaying the segmented target image to the low-light-level night vision device image to form a display image based on a preset overlaying rule, combining the advantages of easy-to-observe environment of the low-light-level night vision device image and easy-to-find target of the infrared thermal imaging image according to the scheme, and understanding the information without frequent switching of video sources of a user, so that the target and the environment information can be comprehensively perceived.
Drawings
Fig. 1 is a flowchart of an augmented reality method based on image segmentation and fusion provided in the present application.
Fig. 2 is a schematic diagram of coordinate system establishment of an augmented reality method based on image segmentation and fusion provided in the present application.
Fig. 3 is a schematic structural diagram of an overall device for augmented reality based on image segmentation and fusion provided by the present application.
Fig. 4 is a schematic structural diagram of an electronic device provided in the present application.
Description of reference numerals: 200. an augmented reality device based on image segmentation and fusion; 201. an acquisition module; 202. a target detection module; 203. an image segmentation module; 204. an image fusion module; 301. a CPU; 302. a ROM; 303. a RAM; 304. an I/O interface; 305. an input section; 306. an output section; 307. a storage section; 308. a communication section; 309. a driver; 310. a removable media.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless specifically defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
At present, the night vision devices in common use are mainly low-light night vision devices and infrared thermal imaging night vision devices. In a low-light device, weak natural light reflected from the target's surface enters the device and is focused, under the action of a high-luminosity objective lens, onto the photocathode surface of an image intensifier (which coincides with the rear focal plane of the objective), exciting photoelectrons; these are accelerated, focused, and imaged by the electron-optical system inside the intensifier, bombard the intensifier's fluorescent screen at very high speed, and excite visible light strong enough that a distant target illuminated only by weak natural light becomes a visible-light image suitable for human observation, further magnified by the eyepiece for more effective visual observation. An infrared thermal imaging night vision device, also called a thermal imager, forms a thermal image from the target's infrared radiation. A low-light night vision device helps the user inspect the surrounding environment but makes targets hard to find, whereas an infrared thermal imaging night vision device helps the user find targets with infrared signatures (such as human bodies and vehicles) but makes environmental detail hard to see. Therefore, when used independently, each device displays a single type of information, which is not conducive to comprehensive perception of target and environmental information.
In order to comprehensively sense target and environmental information, the embodiment of the application discloses a method and a device for augmented reality based on image segmentation and fusion and electronic equipment.
Referring to fig. 1, the augmented reality method based on image segmentation and fusion includes:
s101: and acquiring a low-light-level night vision device image output by the low-light-level night vision camera and an infrared thermal imaging image output by the infrared camera.
Specifically, a low-light night vision camera and an infrared camera are arranged on the AR glasses; the low-light night vision camera collects low-light night vision device images and the infrared camera collects infrared thermal imaging images. After the two cameras are installed, their optical axes are calibrated physically to ensure that the low-light night vision camera and the infrared camera aim at the same point. In this embodiment, "aiming at the same point" means that, after optical-axis calibration, when the two cameras shoot the same target within a preset distance range, the deviation of the center point of the target's image is within a preset threshold range, for example 0 to 3%, so that the two cameras obtain images with the same field of view.
A processor is arranged in the AR glasses; the low-light night vision device image collected by the low-light night vision camera and the infrared thermal imaging image collected by the infrared camera are uploaded to the processor and stored. In this embodiment, the low-light night vision device may also be a high-sensitivity RGB camera, a full-color night vision camera, or the like; no limitation is imposed here.
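The acquisition step pairs one frame from each camera. A minimal sketch of such pairing is shown below, with a stub frame source standing in for a real capture backend (for example, an OpenCV `VideoCapture`); all class and method names here are illustrative:

```python
# Pair the latest frames from the two cameras. A real implementation
# would also timestamp frames and drop stale ones to keep the pair
# temporally aligned.
class FramePairer:
    def __init__(self, lowlight_source, thermal_source):
        self.lowlight_source = lowlight_source
        self.thermal_source = thermal_source

    def next_pair(self):
        # One low-light frame and one thermal frame per call.
        return self.lowlight_source.read(), self.thermal_source.read()

class ListSource:
    # Stub source replaying pre-recorded frames, for illustration only.
    def __init__(self, frames):
        self._frames = iter(frames)

    def read(self):
        return next(self._frames)

pairer = FramePairer(ListSource(["ll0", "ll1"]), ListSource(["ir0", "ir1"]))
pair = pairer.next_pair()  # -> ("ll0", "ir0")
```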
S102: and detecting a target image in the infrared thermal imaging image according to a preset target detection method, and judging whether the target image is detected.
Specifically, target image material is obtained, i.e., image information of the detected target in various states, the target being one with infrared characteristics, such as a human body or a vehicle. Large amounts of target image data are gathered with a web crawler, the obtained target images are converted into first characteristic values (encrypted character strings) according to a preset conversion algorithm, and the values are stored to form a target recognition library. Using a common mature algorithm such as Faster R-CNN or YOLO, the pipeline sequentially completes infrared thermal imaging image input, candidate region generation, feature extraction, and target-type judgment of the features, and finally outputs the detection result: a bounding box whose size matches the target, the image covered by the box being the target image.
After the target image is determined, it is converted into a second characteristic value, which is compared with the stored first characteristic values. When the similarity between a first characteristic value and the second characteristic value reaches the target detection threshold, the target is judged to be detected and step S103 is executed; when it does not, execution continues from step S101.
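The patent does not specify the conversion algorithm; as an illustrative stand-in, the sketch below reduces each crop to an average-hash bit string and declares a match when bit similarity reaches a threshold. All names and values are assumptions:

```python
# Reduce a grayscale crop (2D list of pixel values) to a bit string:
# each bit says whether that pixel is above the crop's mean intensity.
def average_hash(pixels):
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if v >= mean else "0" for v in flat)

# Fraction of matching bits between two equal-length hashes.
def similarity(h1, h2):
    return sum(a == b for a, b in zip(h1, h2)) / len(h1)

stored = average_hash([[200, 40], [180, 30]])    # "first characteristic value"
detected = average_hash([[210, 50], [175, 20]])  # "second characteristic value"
is_target = similarity(stored, detected) >= 0.9  # match when similarity >= threshold
```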
S103: and segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm.
Specifically, after the target image is determined, it is extracted from the infrared thermal imaging image with a preset image segmentation algorithm. In this embodiment the preset image segmentation algorithm may be any one of trimap-based matting, Deep Image Matting, and Background Matting, without limitation here; these are mature image segmentation algorithms, and their specific processing flows are not detailed further.
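Those matting methods are not reproduced here; as a much simpler illustrative stand-in, the sketch below cuts the target out of the thermal frame with an intensity threshold inside the detection box (hot pixels tend to be bright in thermal imagery). The function name, box layout, and threshold are assumptions:

```python
# frame: 2D grayscale list; box: (x, y, w, h) detection rectangle.
# Returns a binary mask of the box region: 1 where the pixel is at
# least `threshold` (treated as target), 0 elsewhere (background).
def segment_target(frame, box, threshold):
    x, y, w, h = box
    return [
        [1 if frame[r][c] >= threshold else 0 for c in range(x, x + w)]
        for r in range(y, y + h)
    ]

frame = [
    [10, 10, 10, 10],
    [10, 250, 240, 10],
    [10, 245, 235, 10],
    [10, 10, 10, 10],
]
mask = segment_target(frame, (1, 1, 2, 2), 200)  # hot 2x2 block -> all ones
```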
S104: and based on a preset superposition rule, superposing the segmented target image to the low-light night vision device image to form a display image.
Referring to fig. 1 and 2, specifically, after the target image is determined, a first base point is determined in the infrared thermal imaging image and a second base point is determined in the low-light night vision device image according to a preset determination rule.
After the first base point and the second base point are determined, a first rectangular coordinate system is established in the infrared thermal imaging image with the first base point as origin, and a second rectangular coordinate system is established in the low-light night vision device image with the second base point as origin, according to a preset establishment rule; specifically, the direction to the right of the origin is the positive X axis and the direction below the origin is the positive Y axis.
The number of pixels h2 in the vertical direction and w2 in the horizontal direction of the infrared thermal imaging image, and the number of pixels h1 in the vertical direction and w1 in the horizontal direction of the low-light night vision device image, are acquired. After the first rectangular coordinate system is determined, the distance x1 from the left boundary of the target image to the Y axis is taken as the abscissa of the target image, and the distance y1 from the upper boundary of the target image to the X axis as its ordinate; in this embodiment all distances are expressed in numbers of pixels.
After the target image is determined, the number of pixels a in its horizontal direction and b in its vertical direction are acquired, and c and d are determined according to the formulas a/w2 = c/w1 and b/h2 = d/h1, where c is the number of pixels of the target image to be fused in the horizontal direction and d the number in the vertical direction; the target image is then resized according to c and d to form the target image to be fused.
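The resize step keeps the target's size proportional between the two frames. A minimal sketch under the formulas above (rounding to whole pixels is an assumption; frame sizes are illustrative):

```python
# Scale the segmented target's pixel dimensions (a x b) in the thermal
# frame to (c x d) in the low-light frame: a/w2 = c/w1, b/h2 = d/h1.
def fused_size(a, b, w_thermal, h_thermal, w_lowlight, h_lowlight):
    c = round(a * w_lowlight / w_thermal)
    d = round(b * h_lowlight / h_thermal)
    return c, d

# A 64x128-pixel target in a 640x512 thermal frame becomes 192x270
# pixels in a 1920x1080 low-light frame.
c, d = fused_size(64, 128, 640, 512, 1920, 1080)
```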
After the horizontal and vertical coordinates of the target image in the first rectangular coordinate system are determined, the coordinates of the target image to be fused in the second rectangular coordinate system are determined according to the formulas x1/w2 = x2/w1 and y1/h2 = y2/h1, where w2 is the number of pixels in the horizontal direction of the infrared thermal imaging image and w1 that of the low-light night vision device image; x2 is the abscissa of the target image to be fused in the second rectangular coordinate system; h2 is the number of pixels in the vertical direction of the infrared thermal imaging image and h1 that of the low-light night vision device image; and y2 is the ordinate of the target image to be fused in the second rectangular coordinate system.
And adding the target image to be fused into the second rectangular coordinate system according to the coordinate of the target image to be fused in the second rectangular coordinate system, so that the target image to be fused is superposed into the low-light level night vision device image to form a display image.
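The superposition itself can be sketched as a masked paste: the resized target crop is written into the low-light frame at its mapped coordinates, with the segmentation mask ensuring only target pixels are copied. Array shapes and values below are illustrative, not from the patent:

```python
import numpy as np

# Paste `target` into `lowlight` at top-left (x2, y2), copying only the
# pixels where `mask` is nonzero. Returns a new array; the input frame
# is left unmodified.
def overlay(lowlight, target, mask, x2, y2):
    out = lowlight.copy()
    h, w = target.shape
    region = out[y2:y2 + h, x2:x2 + w]     # view into the output frame
    keep = mask.astype(bool)
    region[keep] = target[keep]            # masked copy of target pixels
    return out

lowlight = np.zeros((4, 4), dtype=np.uint8)
target = np.full((2, 2), 255, dtype=np.uint8)
mask = np.array([[1, 0], [0, 1]])          # diagonal pixels belong to target
display = overlay(lowlight, target, mask, 1, 1)
```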
After the display image is formed, manually input coordinates of the target image to be fused can also be obtained, and the position of the target image to be fused within the display image adjusted according to those coordinates; a worker can thus fine-tune the position by manual calibration so that the target image to be fused sits more accurately in the display image. The coordinates can be input through a touch screen or keys on the AR glasses.
The embodiment of the application also discloses an augmented reality apparatus based on image segmentation and fusion.
Referring to fig. 3, an apparatus 200 for augmented reality based on image segmentation and fusion includes,
the acquisition module 201 is configured to acquire a low-light-level night vision device image output by a low-light-level night vision camera and an infrared thermal imaging image output by an infrared camera;
the target detection module 202 is configured to detect a target image in the infrared thermal imaging image according to a preset target detection method, and determine whether the target image is detected;
the image segmentation module 203 is configured to segment the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm if the target image is detected;
and the image fusion module 204 is configured to superimpose the segmented target image onto the low-light night vision device image to form a display image based on a preset superimposition rule.
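The four modules above form a linear acquisition, detection, segmentation, and fusion pipeline. A minimal sketch of how they might be wired together is given below; the class name and all injected callables are illustrative assumptions (any detector, e.g. YOLO, and any matting algorithm could be plugged in), not identifiers from the patent:

```python
class ARFusionPipeline:
    """Sketch of the pipeline formed by modules 201-204:
    acquire -> detect -> segment -> fuse."""

    def __init__(self, detect, segment, fuse):
        self.detect = detect    # thermal image -> target box, or None if nothing found
        self.segment = segment  # (thermal image, box) -> segmented target patch
        self.fuse = fuse        # (night vision image, patch) -> display image

    def step(self, nv_image, thermal_image):
        box = self.detect(thermal_image)
        if box is None:
            # no target detected: show the night vision image unchanged
            return nv_image
        patch = self.segment(thermal_image, box)
        return self.fuse(nv_image, patch)
```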
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the modules described above may refer to the corresponding processes in the foregoing method embodiment, and the details are not repeated here.
An electronic apparatus is also disclosed in an embodiment of the present application. Referring to fig. 4, the electronic apparatus includes a central processing unit (CPU) 301 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage section 307 into a random access memory (RAM) 303. The RAM 303 also stores the programs and data necessary for system operation. The CPU 301, ROM 302, and RAM 303 are connected to one another via a bus, to which an I/O interface 304 is also connected.
The following components are connected to the I/O interface 304: an input section 305 including a keyboard, a mouse, and the like; an output section 306 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 307 including a hard disk and the like; and a communication section 308 including a network interface card such as a LAN card or a modem. The communication section 308 performs communication processing via a network such as the Internet. A drive 309 is also connected to the I/O interface 304 as needed. A removable medium 310, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 309 as necessary, so that a computer program read from it can be installed into the storage section 307 as needed.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart of fig. 1 may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 308 and/or installed from the removable medium 310. When executed by the central processing unit (CPU) 301, the computer program performs the above-described functions defined in the apparatus of the present application.
The foregoing is a preferred embodiment of the present application and is not intended to limit the scope of the application in any way, and any features disclosed in this specification (including the abstract and drawings) may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.
Claims (9)
1. A method for augmented reality based on image segmentation and fusion, the method being applied to AR glasses on which a low-light night vision camera and an infrared camera are arranged, the optical axes being physically calibrated so that the camera ends of the low-light night vision camera and the infrared camera are aimed at the same position, characterized in that the method comprises the following steps:
acquiring a low-light-level night vision device image output by a low-light-level night vision camera and an infrared thermal imaging image output by an infrared camera;
detecting a target image in the infrared thermal imaging image according to a preset target detection method, and judging whether the target image is detected;
if so, segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm;
and based on a preset superposition rule, superposing the segmented target image to the low-light night vision device image to form a display image.
2. The augmented reality method based on image segmentation and fusion according to claim 1, characterized in that detecting a target image in the infrared thermal imaging image according to the preset target detection method and judging whether the target image is detected specifically comprises:
acquiring a target image material, converting the target image material into a first characteristic value according to a preset conversion algorithm, and storing the first characteristic value;
detecting a target image according to YOLO or Faster R-CNN;
converting the detected target image into a second characteristic value according to a preset conversion algorithm;
and comparing the first characteristic value with the second characteristic value, and judging that the target image is detected when the similarity of the first characteristic value and the second characteristic value reaches a threshold value.
3. The augmented reality method based on image segmentation and fusion according to claim 2, characterized in that, after detecting a target image in the infrared thermal imaging image according to the preset target detection method and judging whether the target image is detected, the method further comprises:
if not, returning to the step of detecting a target image in the infrared thermal imaging image according to the preset target detection method and judging whether the target image is detected.
4. The augmented reality method based on image segmentation and fusion according to claim 3, characterized in that the preset image segmentation algorithm is any one of Trimap-based matting, Deep Image Matting, and Background Matting.
5. The augmented reality method based on image segmentation and fusion according to claim 1, characterized in that superimposing the segmented target image onto the low-light night vision device image to form a display image based on the preset superposition rule specifically comprises: determining a first base point in the infrared thermal imaging image and a second base point in the low-light night vision device image according to a preset determination rule;
establishing a first plane rectangular coordinate system by taking a first base point as an origin based on a preset rectangular coordinate system establishing rule, and establishing a second plane rectangular coordinate system by taking a second base point as the origin;
determining the coordinates of the target image in a first plane rectangular coordinate system;
acquiring the numbers of pixels in the vertical direction and in the horizontal direction of the infrared thermal imaging image, and the numbers of pixels in the vertical direction and in the horizontal direction of the low-light night vision device image;
based on a preset adjusting mode, adjusting a target image according to the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the infrared thermal imaging image and the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the low-light night vision device image to form a target image to be fused;
determining the coordinates of the target image to be fused in a second rectangular coordinate system according to the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the infrared thermal imaging image, the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the low-light level night vision device image and the coordinates of the target image in the first rectangular coordinate system on the basis of a preset calculation mode;
and superimposing the target image to be fused onto the low-light night vision device image according to its coordinates in the second rectangular coordinate system to form a display image.
6. The augmented reality method based on image segmentation and fusion according to claim 5, characterized in that, after superimposing the target image to be fused onto the low-light night vision device image according to its coordinates in the second rectangular coordinate system to form a display image, the method further comprises the following steps:
acquiring coordinates of a target image to be fused which is manually input;
and adjusting the position of the target image to be fused in the low-light night vision device image according to the manually input coordinates of the target image to be fused to form a final image.
7. The augmented reality method based on image segmentation and fusion according to claim 5, characterized in that the preset calculation mode is: x1/w1 = x2/w2, y1/h1 = y2/h2; wherein x1 is the abscissa of the target image in the first rectangular coordinate system, and w1 is the number of pixels in the horizontal direction of the low-light night vision device image; x2 is the abscissa of the target image to be fused in the second rectangular coordinate system, and w2 is the number of pixels in the horizontal direction of the infrared thermal imaging image; y1 is the ordinate of the target image in the first rectangular coordinate system, and h1 is the number of pixels in the vertical direction of the low-light night vision device image; y2 is the ordinate of the target image to be fused in the second rectangular coordinate system, and h2 is the number of pixels in the vertical direction of the infrared thermal imaging image.
8. An augmented reality device based on image segmentation and fusion, characterized by comprising:
the acquisition module (201) is used for acquiring a low-light-level night vision device image output by the low-light-level night vision camera and an infrared thermal imaging image output by the infrared camera;
the target detection module (202) is used for detecting a target image in the infrared thermal imaging image according to a preset target detection method and judging whether the target image is detected;
the image segmentation module (203) is used for segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm if the target image is detected;
and the image fusion module (204) is used for superposing the segmented target image to the low-light night vision device image to form a display image based on a preset superposition rule.
9. An electronic device, characterized by comprising a memory and a processor, wherein the memory stores a computer program that can be loaded by the processor to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211023758.8A CN115100556B (en) | 2022-08-25 | 2022-08-25 | Augmented reality method and device based on image segmentation and fusion and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115100556A true CN115100556A (en) | 2022-09-23 |
CN115100556B CN115100556B (en) | 2022-11-22 |
Family
ID=83300582
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211023758.8A Active CN115100556B (en) | 2022-08-25 | 2022-08-25 | Augmented reality method and device based on image segmentation and fusion and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115100556B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101673396A (en) * | 2009-09-07 | 2010-03-17 | 南京理工大学 | Image fusion method based on dynamic object detection |
CN101853492A (en) * | 2010-05-05 | 2010-10-06 | 浙江理工大学 | Method for fusing night-viewing twilight image and infrared image |
CN109618087A (en) * | 2019-01-28 | 2019-04-12 | 北京晶品特装科技有限责任公司 | A kind of infrared and low-light fusion night vision device having precision target positioning function |
CN112053314A (en) * | 2020-09-04 | 2020-12-08 | 深圳市迈测科技股份有限公司 | Image fusion method and device, computer equipment, medium and thermal infrared imager |
CN113298177A (en) * | 2021-06-11 | 2021-08-24 | 华南理工大学 | Night image coloring method, device, medium, and apparatus |
CN216568562U (en) * | 2021-10-15 | 2022-05-24 | 北京红翼前锋科技发展有限公司 | Multifunctional intelligent helmet |
CN114912536A (en) * | 2022-05-26 | 2022-08-16 | 成都恒安警用装备制造有限公司 | Target identification method based on radar and double photoelectricity |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116993729A (en) * | 2023-09-26 | 2023-11-03 | 南京铂航电子科技有限公司 | Night vision device imaging system and method based on second harmonic |
CN116993729B (en) * | 2023-09-26 | 2024-03-29 | 南京铂航电子科技有限公司 | Night vision device imaging system and method based on second harmonic |
Also Published As
Publication number | Publication date |
---|---|
CN115100556B (en) | 2022-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018076732A1 (en) | Method and apparatus for merging infrared image and visible light image | |
EP0932114B1 (en) | A method of and apparatus for detecting a face-like region | |
CN107635129B (en) | Three-dimensional trinocular camera device and depth fusion method | |
KR100776801B1 (en) | Gesture recognition method and system in picture process system | |
CN107992857A (en) | A kind of high-temperature steam leakage automatic detecting recognition methods and identifying system | |
CN111462128B (en) | Pixel-level image segmentation system and method based on multi-mode spectrum image | |
JP2000082147A (en) | Method for detecting human face and device therefor and observer tracking display | |
CN110189294B (en) | RGB-D image significance detection method based on depth reliability analysis | |
JP2008123113A (en) | Pedestrian detection device | |
CN114114312A (en) | Three-dimensional target detection method based on fusion of multi-focal-length camera and laser radar | |
CN109035307B (en) | Set area target tracking method and system based on natural light binocular vision | |
CN105869115B (en) | A kind of depth image super-resolution method based on kinect2.0 | |
CN115100556B (en) | Augmented reality method and device based on image segmentation and fusion and electronic equipment | |
CN110796032A (en) | Video fence based on human body posture assessment and early warning method | |
CN113573035A (en) | AR-HUD brightness self-adaptive adjusting method based on vision | |
CN113762161A (en) | Intelligent obstacle monitoring method and system | |
JP4203279B2 (en) | Attention determination device | |
Shi et al. | A method for detecting pedestrian height and distance based on monocular vision technology | |
JP7092616B2 (en) | Object detection device, object detection method, and object detection program | |
CN109711352A (en) | Vehicle front road environment based on geometry convolutional neural networks has an X-rayed cognitive method | |
CN110909571A (en) | High-precision face recognition space positioning method | |
CN114295108A (en) | Distance measurement method and system for external equipment and infrared telescope | |
CN115937776A (en) | Monitoring method, device, system, electronic equipment and computer readable storage medium | |
CN110389390B (en) | Large-view-field infrared shimmer naturalness color fusion system | |
KR20100081099A (en) | Apparatus and method for out-focasing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||