CN115100556B - Augmented reality method and device based on image segmentation and fusion and electronic equipment - Google Patents

Augmented reality method and device based on image segmentation and fusion and electronic equipment

Info

Publication number
CN115100556B
CN115100556B · CN202211023758.8A · CN202211023758A
Authority
CN
China
Prior art keywords
image
target image
target
night vision
thermal imaging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211023758.8A
Other languages
Chinese (zh)
Other versions
CN115100556A (en)
Inventor
刘天一
吴斐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing LLvision Technology Co ltd
Original Assignee
Beijing LLvision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing LLvision Technology Co ltd filed Critical Beijing LLvision Technology Co ltd
Priority to CN202211023758.8A
Publication of CN115100556A
Application granted
Publication of CN115100556B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/30Transforming light or analogous information into electric information
    • H04N5/33Transforming infrared radiation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an augmented reality method and device based on image segmentation and fusion, and to electronic equipment. AR glasses are provided with a low-light-level night vision camera and an infrared camera whose optical axes are physically calibrated to ensure that both cameras aim at the same position. The method includes: acquiring a low-light-level night vision device image output by the low-light-level night vision camera and an infrared thermal imaging image output by the infrared camera; detecting a target image in the infrared thermal imaging image according to a preset target detection method, and judging whether the target image is detected; if so, segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm; and, based on a preset superposition rule, superimposing the segmented target image onto the low-light-level night vision device image to form a display image. The method and device make it convenient to comprehensively perceive the target and the environmental information.

Description

Augmented reality method and device based on image segmentation and fusion and electronic equipment
Technical Field
The present application relates to the field of image fusion, and in particular, to a method and an apparatus for augmented reality based on image segmentation and fusion, and an electronic device.
Background
A night vision device is a night-time observation and sighting instrument built around an image intensifier. In operation it does not illuminate the target with an infrared searchlight; instead, the light reflected by the target under weak illumination passes through the image intensifier and is amplified into a visible image perceptible to the human eye, allowing the target to be observed and aimed at.
At present, the common night vision devices are mainly low-light-level night vision devices and infrared thermal imaging night vision devices. In a low-light-level night vision device, weak natural light reflected from the surface of a target enters the device and, under the action of a high-light-power objective lens, is focused on the photocathode surface of an image intensifier (coincident with the rear focal plane of the objective lens), where it excites photoelectrons; the intensifier then turns a remote target illuminated only by weak natural light into a visible-light image suitable for human observation. An infrared thermal imaging night vision device relies on the infrared radiation of the target itself to form a "thermal image", and is therefore also known as a thermal imager.
A low-light-level night vision device helps the user survey the surrounding environment but makes targets hard to find, whereas an infrared thermal imaging night vision device helps the user find targets with infrared characteristics but makes environmental detail hard to see.
Disclosure of Invention
In order to facilitate comprehensive perception of target and environmental information, the present application provides an augmented reality method and device based on image segmentation and fusion, and electronic equipment.
In a first aspect, the present application provides an augmented reality method based on image segmentation and fusion, which adopts the following technical scheme:
an augmented reality method based on image segmentation and fusion, applied to AR glasses on which a low-light-level night vision camera and an infrared camera are arranged, their optical axes being physically calibrated to ensure that the camera ends of the low-light-level night vision camera and the infrared camera aim at the same position, the method comprising:
acquiring a low-light-level night vision device image output by a low-light-level night vision camera and an infrared thermal imaging image output by an infrared camera;
detecting a target image in the infrared thermal imaging image according to a preset target detection method, and judging whether the target image is detected;
if yes, segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm;
and, based on a preset superposition rule, superimposing the segmented target image onto the low-light-level night vision device image to form a display image.
By adopting this technical scheme, the low-light-level night vision device image output by the low-light-level night vision camera and the infrared thermal imaging image output by the infrared camera are obtained. After the infrared thermal imaging image is obtained, a target image in it is detected in real time according to the preset target detection method, and it is judged whether the target image is detected. Once detected, the target image is segmented from the infrared thermal imaging image and then, based on the preset superposition rule, superimposed onto the low-light-level night vision device image to form a display image. The scheme combines the advantage of the low-light-level image, in which the environment is easy to observe, with that of the infrared thermal imaging image, in which targets are easy to find, so the user obtains the information without frequently switching video sources and can comprehensively perceive the target and the environmental information.
Optionally, the method for detecting a target image in an infrared thermal imaging image according to a preset target detection method and determining whether the target image is detected specifically includes:
acquiring a target image material, converting the target image material into a first characteristic value according to a preset conversion algorithm, and storing the first characteristic value;
detecting a target image according to YOLO or Faster R-CNN;
converting the detected target image into a second characteristic value according to a preset conversion algorithm;
and comparing the first characteristic value with the second characteristic value, and judging that the target image is detected when the similarity of the first characteristic value and the second characteristic value reaches a threshold value.
Optionally, after the method of detecting a target image in an infrared thermal imaging image according to a preset target detection method and determining whether the target image is detected, the method further includes:
if not, continuing to detect a target image in the infrared thermal imaging image according to the preset target detection method and judging whether the target image is detected.
Optionally, the preset image segmentation algorithm is any one of Trimap-based matting, Adobe Deep Image Matting and Background Matting.
Optionally, according to a preset determination rule, a first base point is determined in the infrared thermal imaging image, and a second base point is determined in the low-light night vision device image;
establishing a first plane rectangular coordinate system by taking a first base point as an origin based on a preset rectangular coordinate system establishing rule, and establishing a second plane rectangular coordinate system by taking a second base point as the origin;
determining the coordinates of the target image in a first plane rectangular coordinate system;
acquiring the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the infrared thermal imaging image, and the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the low-light night vision device image;
based on a preset adjusting mode, adjusting a target image according to the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the infrared thermal imaging image and the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the low-light night vision device image to form a target image to be fused;
determining the coordinates of the target image to be fused in a second rectangular coordinate system according to the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the infrared thermal imaging image, the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the low-light level night vision device image and the coordinates of the target image in the first rectangular coordinate system on the basis of a preset calculation mode;
and superposing the target image to the low-light level night vision device image according to the coordinate of the target image to be fused in the second rectangular coordinate system to form a display image.
Optionally, after the method of superimposing the target image onto the low-light level night vision device image according to the coordinate of the target image to be fused in the second rectangular coordinate system to form the display image, the method further includes:
acquiring coordinates of a target image to be fused which is manually input;
and adjusting the position of the target image to be fused in the low-light night vision device image according to the manually input coordinates of the target image to be fused to form a final image.
Optionally, the preset calculation mode is: x1/w1 = x2/w2, y1/h1 = y2/h2; where x1 is the abscissa of the target image in the first rectangular coordinate system and w1 is the number of pixels in the horizontal direction of the low-light night vision device image; x2 is the abscissa of the target image in the second rectangular coordinate system and w2 is the number of pixels in the horizontal direction of the infrared thermal imaging image; y1 is the ordinate of the target image in the first rectangular coordinate system and h1 is the number of pixels in the vertical direction of the low-light night vision device image; y2 is the ordinate of the target image in the second rectangular coordinate system and h2 is the number of pixels in the vertical direction of the infrared thermal imaging image.
In a second aspect, the present application provides an augmented reality apparatus based on image segmentation and fusion, which adopts the following technical solution:
an apparatus for augmented reality based on image segmentation and fusion, comprising:
the acquisition module is used for acquiring a low-light-level night vision device image output by the low-light-level night vision camera and an infrared thermal imaging image output by the infrared camera;
the target detection module is used for detecting a target image in the infrared thermal imaging image according to a preset target detection method and judging whether the target image is detected or not;
the image segmentation module is used for segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm if the target image is detected;
and the image fusion module is used for superposing the segmented target image to the low-light level night vision device image to form a display image based on a preset superposition rule.
In a third aspect, the present application provides an electronic device, which adopts the following technical solution:
an electronic device comprising a memory and a processor, the memory storing a computer program of the augmented reality method based on image segmentation and fusion, which can be loaded and executed by the processor.
To sum up, the application comprises the following beneficial technical effects:
the method comprises the steps of obtaining a low-light-level night vision device image output by a low-light-level night vision camera and an infrared thermal imaging image output by an infrared camera, detecting a target image in the infrared thermal imaging image in real time according to a preset target detection method after the infrared thermal imaging image is obtained, judging whether the target image is detected or not, segmenting the target image from the infrared thermal imaging image after the target image is detected, and then overlaying the segmented target image to the low-light-level night vision device image to form a display image based on a preset overlaying rule.
Drawings
Fig. 1 is a flowchart of an augmented reality method based on image segmentation and fusion provided in the present application.
Fig. 2 is a schematic diagram of establishing a coordinate system of the augmented reality method based on image segmentation and fusion provided by the present application.
Fig. 3 is a schematic structural diagram of an overall device for augmented reality based on image segmentation and fusion provided by the present application.
Fig. 4 is a schematic structural diagram of an electronic device provided in the present application.
Description of reference numerals: 200. an augmented reality device based on image segmentation and fusion; 201. an acquisition module; 202. a target detection module; 203. an image segmentation module; 204. an image fusion module; 301. a CPU; 302. a ROM; 303. a RAM; 304. an I/O interface; 305. an input section; 306. an output section; 307. a storage section; 308. a communication section; 309. a driver; 310. a removable media.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless specifically defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
At present, the night vision devices commonly used are mainly low-light-level night vision devices and infrared thermal imaging night vision devices. In a low-light-level night vision device, weak natural light reflected from the surface of a target enters the device and, under the action of a high-light-power objective lens, is focused on the photocathode surface of an image intensifier (coincident with the rear focal plane of the objective lens), where it excites photoelectrons; these are accelerated, focused and imaged by the electron-optical system inside the intensifier, bombard its fluorescent screen at very high speed and excite visible light strong enough that a remote target illuminated only by weak natural light becomes a visible-light image suitable for observation by the human eye, which is further magnified by the eyepiece for more effective visual observation. An infrared thermal imaging night vision device forms a thermal image from the infrared radiation of the target itself, and is therefore also called a thermal imager. A low-light-level night vision device helps the user survey the surrounding environment but makes targets hard to find, whereas an infrared thermal imaging night vision device helps the user find targets with infrared characteristics (such as human bodies and vehicles) but makes environmental detail hard to see. Therefore, when used alone, each device displays a single type of information and does not allow comprehensive perception of the target and the environmental information.
In order to comprehensively perceive target and environmental information, the embodiments of the present application disclose an augmented reality method and device based on image segmentation and fusion, and electronic equipment.
Referring to fig. 1, the augmented reality method based on image segmentation and fusion includes:
s101: and acquiring a low-light-level night vision device image output by the low-light-level night vision camera and an infrared thermal imaging image output by the infrared camera.
Specifically, a low-light-level night vision camera and an infrared camera are arranged on the AR glasses; the low-light-level night vision camera collects the low-light-level night vision device image and the infrared camera collects the infrared thermal imaging image. After the two cameras are installed, their optical axes are calibrated physically to ensure that they aim at the same point. In this embodiment, "aiming at the same point" means that, after the optical axes are calibrated, when the two cameras shoot the same target within a preset distance range, the deviation of the centre point of the images formed by the target is within a preset threshold range, for example 0 to 3%, so that the two cameras obtain images with the same field of view.
A processor is arranged in the AR glasses; the low-light-level night vision device image collected by the low-light-level night vision camera and the infrared thermal imaging image collected by the infrared camera are uploaded to the processor and stored. In this embodiment, the low-light-level night vision camera may also be a high-sensitivity RGB camera, a full-color night vision camera, or the like, which is not limited here.
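As a minimal sketch of step S101, assuming the two cameras are exposed to the processor as ordinary video capture devices (the device indices below are placeholders, not something the patent specifies), the frame pair could be grabbed like this:

```python
import cv2

# Placeholder device indices -- the patent does not say how the cameras
# are enumerated on the AR glasses' processor.
LOW_LIGHT_CAM = 0
INFRARED_CAM = 1

def grab_frame_pair():
    """Grab one low-light night vision frame and one infrared thermal frame."""
    ll_cap = cv2.VideoCapture(LOW_LIGHT_CAM)
    ir_cap = cv2.VideoCapture(INFRARED_CAM)
    try:
        ok_ll, low_light_img = ll_cap.read()
        ok_ir, infrared_img = ir_cap.read()
        if not (ok_ll and ok_ir):
            raise RuntimeError("failed to read from one of the cameras")
        return low_light_img, infrared_img
    finally:
        ll_cap.release()
        ir_cap.release()
```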
S102: detecting a target image in the infrared thermal imaging image according to a preset target detection method, and judging whether the target image is detected.
Specifically, target image material is obtained, the material being image information of the detected target in various states, the target being one with infrared characteristics, such as a human body or a vehicle. Target image big data are gathered with a web crawler, and the obtained target images are converted into first characteristic values, that is, encrypted character strings, according to a preset conversion algorithm and stored to form a target recognition library. Using a common mature algorithm such as Faster R-CNN or YOLO, the infrared thermal imaging image is input, candidate regions are generated, features are extracted, and the target type is judged from the features; finally the detection result is output as a bounding box whose size matches the target, the image covered by the bounding box being the target image.
After the target image is determined, it is converted into a second characteristic value, which is compared with the stored first characteristic values. When the similarity between a first characteristic value and the second characteristic value reaches the target detection threshold, it is judged that the target is detected and step S103 is executed; when the similarity does not reach the target detection threshold, step S101 continues to be executed.
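The patent leaves the "preset conversion algorithm" unspecified (it only says the result is an encrypted character string), so the sketch below stands in a normalized grayscale histogram as the characteristic value and cosine similarity as the comparison; the detected crop itself is assumed to come from an off-the-shelf YOLO or Faster R-CNN model, and the threshold value is an assumption:

```python
import cv2
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # stand-in for the patent's target detection threshold

def to_feature_value(image_bgr):
    """Stand-in for the 'preset conversion algorithm': a normalized
    64-bin grayscale histogram used as the characteristic value."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256]).flatten()
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def is_target_detected(detected_crop, target_library):
    """Compare the second characteristic value (from the detected crop)
    against the stored first characteristic values; the target counts as
    detected when any similarity reaches the threshold."""
    second = to_feature_value(detected_crop)
    return any(float(np.dot(first, second)) >= SIMILARITY_THRESHOLD
               for first in target_library)
```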
S103: segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm.
In this embodiment, the preset image segmentation algorithm may be any one of Trimap-based matting, Adobe Deep Image Matting and Background Matting, which is not limited here; these are mature image segmentation algorithms, so their specific processing flows are not described in detail.
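The named matting algorithms are out of scope for a short example; purely as an illustrative stand-in for this step, the target can be cut out by cropping the detection box from the thermal image and thresholding its hot pixels into a mask (the threshold value is an assumption):

```python
import cv2

def segment_target(ir_image, box, hot_threshold=200):
    """Illustrative stand-in for the matting step: crop the detection box
    and treat bright (hot) pixels as the target, returning the image patch
    plus an alpha-like binary mask."""
    x, y, w, h = box                    # bounding box from step S102
    crop = ir_image[y:y + h, x:x + w]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, hot_threshold, 255, cv2.THRESH_BINARY)
    return crop, mask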
S104: superimposing the segmented target image onto the low-light-level night vision device image based on a preset superposition rule to form a display image.
Referring to fig. 1 and fig. 2, specifically, after the target image is determined, according to a preset determination rule, a first base point is determined in the infrared thermal imaging image, and a second base point is determined in the low-light level night vision device image.
After the first base point and the second base point are determined, a first rectangular coordinate system is established in the infrared thermal imaging image with the first base point as the origin, and a second rectangular coordinate system is established in the low-light-level night vision device image with the second base point as the origin, according to a preset rectangular-coordinate-system establishing rule; specifically, the right of the origin is the positive X direction and below the origin is the positive Y direction.
The number of pixels h2 in the vertical direction and the number of pixels w2 in the horizontal direction of the infrared thermal imaging image are acquired, together with the number of pixels h1 in the vertical direction and the number of pixels w1 in the horizontal direction of the low-light night vision device image. After the first rectangular coordinate system is determined, the distance x1 of the left boundary of the target image from the Y axis is taken as the abscissa of the target image, and the distance y1 of the upper boundary of the target image from the X axis as its ordinate; all distances in this embodiment are expressed in numbers of pixels.
After the target image is determined, the number of pixels a in the horizontal direction and the number of pixels b in the vertical direction of the target image are acquired. According to the formulas a/w2 = c/w1 and b/h2 = d/h1, c is the number of pixels in the horizontal direction and d the number of pixels in the vertical direction of the target image to be fused; once c and d are determined, the target image is adjusted to that size to form the target image to be fused.
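Solving the two proportions for c and d gives the size of the target image to be fused; a short sketch, with an OpenCV resize assumed as the "preset adjusting mode":

```python
import cv2

def scaled_target_size(a, b, w1, h1, w2, h2):
    """From a/w2 = c/w1 and b/h2 = d/h1:
    a, b   -- target width/height in the infrared image (pixels)
    w2, h2 -- infrared image width/height
    w1, h1 -- low-light night vision image width/height"""
    c = max(1, round(a * w1 / w2))
    d = max(1, round(b * h1 / h2))
    return c, d

# Usage: rescale the segmented target to form the target image to be fused.
# c, d = scaled_target_size(a, b, w1, h1, w2, h2)
# target_to_fuse = cv2.resize(target_crop, (c, d))
# mask_to_fuse = cv2.resize(mask, (c, d), interpolation=cv2.INTER_NEAREST)
```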
After the abscissa and ordinate of the target image in the first rectangular coordinate system are determined, the coordinates of the target image to be fused in the second rectangular coordinate system are determined according to the formulas x1/w1 = x2/w2 and y1/h1 = y2/h2, where w1 is the number of pixels in the horizontal direction of the low-light-level night vision device image; x2 is the abscissa of the target image in the second rectangular coordinate system and w2 the number of pixels in the horizontal direction of the infrared thermal imaging image; h1 is the number of pixels in the vertical direction of the low-light-level night vision device image; y2 is the ordinate of the target image in the second rectangular coordinate system and h2 the number of pixels in the vertical direction of the infrared thermal imaging image.
The target image to be fused is then added into the second rectangular coordinate system according to its coordinates there, so that it is superimposed onto the low-light-level night vision device image to form the display image.
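A sketch of the mapping and overlay follows, assuming NumPy for compositing. Note that since x1 spans the infrared width w2 while x2 must span the low-light width w1, the scale factor applied below is w1/w2, matching the size adjustment above; this is one consistent reading of the relation as printed:

```python
import numpy as np

def overlay_on_low_light(low_light_img, target_to_fuse, mask_to_fuse,
                         x1, y1, w1, h1, w2, h2):
    """Map the target's top-left corner (x1, y1) from the infrared frame
    into the low-light frame and composite the rescaled target there.
    w1/h1: low-light image width/height; w2/h2: infrared width/height."""
    x2 = round(x1 * w1 / w2)            # proportional coordinate mapping
    y2 = round(y1 * h1 / h2)

    display = low_light_img.copy()
    d, c = target_to_fuse.shape[:2]
    roi = display[y2:y2 + d, x2:x2 + c]
    rh, rw = roi.shape[:2]              # clip at the image borders
    alpha = (mask_to_fuse[:rh, :rw] > 0)[..., None]
    roi[:] = np.where(alpha, target_to_fuse[:rh, :rw], roi)
    return display
```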
After the display image is formed, manually input coordinates of the target image to be fused can be obtained, and the position of the target image to be fused in the display image is adjusted according to them; a worker can thus refine the position by manual calibration so that the position of the target image in the display image is more accurate. The coordinates of the target image to be fused can be input through a touch screen or keys on the AR glasses.
The embodiment of the application further discloses an augmented reality device based on image segmentation and fusion.
Referring to fig. 3, the augmented reality device 200 based on image segmentation and fusion includes:
the acquisition module 201 is configured to acquire a low-light-level night vision device image output by a low-light-level night vision camera and an infrared thermal imaging image output by an infrared camera;
the target detection module 202 is configured to detect a target image in the infrared thermal imaging image according to a preset target detection method, and determine whether the target image is detected;
the image segmentation module 203 is used for segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm if the target image is the infrared thermal imaging image;
and the image fusion module 204 is configured to superimpose the segmented target image onto the low-light night vision device image to form a display image based on a preset superimposition rule.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
An electronic device is also disclosed in an embodiment of the present application. Referring to fig. 4, the electronic device includes a Central Processing Unit (CPU) 301 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 302 or a program loaded from a storage section 307 into a Random Access Memory (RAM) 303. The RAM 303 also stores the programs and data necessary for system operation. The CPU 301, ROM 302 and RAM 303 are connected to each other via a bus, to which an I/O interface 304 is also connected.
The following components are connected to the I/O interface 304: an input section 305 including a keyboard, a mouse and the like; an output section 306 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage section 307 including a hard disk and the like; and a communication section 308 including a network interface card such as a LAN card or a modem. The communication section 308 performs communication processing via a network such as the Internet. A driver 309 is also connected to the I/O interface 304 as needed. A removable medium 310, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 309 as necessary, so that a computer program read from it can be installed into the storage section 307 as needed.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart of fig. 1 may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 308 and/or installed from the removable medium 310. The functions defined in the apparatus of the present application are executed when the computer program is run by the Central Processing Unit (CPU) 301.
The foregoing is a preferred embodiment of the present application and is not intended to limit the scope of the present application in any way, and any features disclosed in this specification (including the abstract and drawings) may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.

Claims (8)

1. An augmented reality method based on image segmentation and fusion, the method being applied to AR glasses on which a low-light-level night vision camera and an infrared camera are arranged, their optical axes being physically calibrated to ensure that the camera ends of the low-light-level night vision camera and the infrared camera aim at the same position, characterized in that the method comprises:
acquiring a low-light-level night vision device image output by a low-light-level night vision camera and an infrared thermal imaging image output by an infrared camera;
detecting a target image in the infrared thermal imaging image according to a preset target detection method, and judging whether the target image is detected;
if yes, segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm;
determining a first base point in the infrared thermal imaging image and a second base point in the low-light level night vision device image according to a preset determination rule;
establishing a first plane rectangular coordinate system by taking a first base point as an origin based on a preset rectangular coordinate system establishing rule, and establishing a second plane rectangular coordinate system by taking a second base point as the origin;
determining the coordinates of the target image in a first plane rectangular coordinate system;
acquiring the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the infrared thermal imaging image, and the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the low-light night vision device image;
based on a preset adjusting mode, adjusting a target image according to the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the infrared thermal imaging image and the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the low-light level night vision device image to form a target image to be fused;
determining the coordinates of the target image to be fused in a second rectangular coordinate system according to the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the infrared thermal imaging image, the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the low-light level night vision device image and the coordinates of the target image in the first rectangular coordinate system on the basis of a preset calculation mode;
and superposing the target image to the low-light level night vision device image according to the coordinate of the target image to be fused in the second rectangular coordinate system to form a display image.
2. The method for augmented reality based on image segmentation and fusion according to claim 1, wherein: the method for detecting a target image in the infrared thermal imaging image according to a preset target detection method and judging whether the target image is detected specifically comprises the following steps:
acquiring a target image material, converting the target image material into a first characteristic value according to a preset conversion algorithm, and storing the first characteristic value;
detecting a target image according to YOLO or Faster R-CNN;
converting the detected target image into a second characteristic value according to a preset conversion algorithm;
and comparing the first characteristic value with the second characteristic value, and judging that the target image is detected when the similarity of the first characteristic value and the second characteristic value reaches a threshold value.
3. The image segmentation and fusion based augmented reality method of claim 2, wherein: after the method of detecting a target image in the infrared thermal imaging image according to a preset target detection method and judging whether the target image is detected, the method further comprises the following steps:
if not, detecting a target image in the infrared thermal imaging image according to a preset target detection method, and judging whether the target image is detected.
4. The image segmentation and fusion based augmented reality method of claim 3, wherein: the preset image segmentation algorithm is any one of Trimap-based matting, Adobe Deep Image Matting and Background Matting.
5. The method for augmented reality based on image segmentation and fusion according to claim 1, wherein: after the target image is superimposed onto the low-light-level night vision device image according to the coordinates of the target image to be fused in the second rectangular coordinate system to form the display image, the method further comprises:
acquiring coordinates of a target image to be fused which is manually input;
and adjusting the position of the target image to be fused in the low-light night vision device image according to the manually input coordinates of the target image to be fused to form a final image.
6. The image segmentation and fusion based augmented reality method of claim 1, wherein: the preset calculation mode is: x1/w1 = x2/w2, y1/h1 = y2/h2; where x1 is the abscissa of the target image in the first rectangular coordinate system and w1 is the number of pixels in the horizontal direction of the low-light-level night vision device image; x2 is the abscissa of the target image in the second rectangular coordinate system and w2 is the number of pixels in the horizontal direction of the infrared thermal imaging image; y1 is the ordinate of the target image in the first rectangular coordinate system and h1 is the number of pixels in the vertical direction of the low-light night vision device image; y2 is the ordinate of the target image in the second rectangular coordinate system and h2 is the number of pixels in the vertical direction of the infrared thermal imaging image.
7. An augmented reality device based on image segmentation and fusion, characterized in that it comprises:
the acquisition module (201) is used for acquiring a low-light-level night vision device image output by the low-light-level night vision camera and an infrared thermal imaging image output by the infrared camera;
the target detection module (202) is used for detecting a target image in the infrared thermal imaging image according to a preset target detection method and judging whether the target image is detected;
the image segmentation module (203) is used for segmenting the detected target image from the infrared thermal imaging image according to a preset image segmentation algorithm if the target image is detected;
an image fusion module (204) configured to:
determining a first base point in the infrared thermal imaging image and a second base point in the low-light level night vision device image according to a preset determination rule;
establishing a first plane rectangular coordinate system by taking a first base point as an origin based on a preset rectangular coordinate system establishing rule, and establishing a second plane rectangular coordinate system by taking a second base point as the origin;
determining the coordinates of the target image in a first plane rectangular coordinate system;
acquiring the number of pixels in the vertical direction and the number of pixels in the horizontal direction of an infrared thermal imaging image and the number of pixels in the vertical direction and the number of pixels in the horizontal direction of a low-light level night vision device image;
based on a preset adjusting mode, adjusting a target image according to the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the infrared thermal imaging image and the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the low-light level night vision device image to form a target image to be fused;
determining the coordinates of the target image to be fused in a second rectangular coordinate system according to the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the infrared thermal imaging image, the number of pixels in the vertical direction and the number of pixels in the horizontal direction of the low-light level night vision device image and the coordinates of the target image in the first rectangular coordinate system on the basis of a preset calculation mode;
and superposing the target image to the low-light level night vision device image according to the coordinate of the target image to be fused in the second rectangular coordinate system to form a display image.
8. An electronic device, characterized in that: comprising a memory and a processor, said memory having stored thereon a computer program which can be loaded by the processor and which performs the method of any of claims 1 to 6.
CN202211023758.8A 2022-08-25 2022-08-25 Augmented reality method and device based on image segmentation and fusion and electronic equipment Active CN115100556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211023758.8A CN115100556B (en) 2022-08-25 2022-08-25 Augmented reality method and device based on image segmentation and fusion and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211023758.8A CN115100556B (en) 2022-08-25 2022-08-25 Augmented reality method and device based on image segmentation and fusion and electronic equipment

Publications (2)

Publication Number Publication Date
CN115100556A CN115100556A (en) 2022-09-23
CN115100556B true CN115100556B (en) 2022-11-22

Family

ID=83300582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211023758.8A Active CN115100556B (en) 2022-08-25 2022-08-25 Augmented reality method and device based on image segmentation and fusion and electronic equipment

Country Status (1)

Country Link
CN (1) CN115100556B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993729B (en) * 2023-09-26 2024-03-29 南京铂航电子科技有限公司 Night vision device imaging system and method based on second harmonic

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673396B (en) * 2009-09-07 2012-05-23 南京理工大学 Image fusion method based on dynamic object detection
CN101853492B (en) * 2010-05-05 2012-07-04 浙江理工大学 Method for fusing night-viewing twilight image and infrared image
CN109618087A (en) * 2019-01-28 2019-04-12 北京晶品特装科技有限责任公司 A kind of infrared and low-light fusion night vision device having precision target positioning function
CN112053314B (en) * 2020-09-04 2024-02-23 深圳市迈测科技股份有限公司 Image fusion method, device, computer equipment, medium and thermal infrared imager
CN113298177B (en) * 2021-06-11 2023-04-28 华南理工大学 Night image coloring method, device, medium and equipment
CN216568562U (en) * 2021-10-15 2022-05-24 北京红翼前锋科技发展有限公司 Multifunctional intelligent helmet
CN114912536A (en) * 2022-05-26 2022-08-16 成都恒安警用装备制造有限公司 Target identification method based on radar and double photoelectricity

Also Published As

Publication number Publication date
CN115100556A (en) 2022-09-23

Similar Documents

Publication Publication Date Title
EP0932114B1 (en) A method of and apparatus for detecting a face-like region
WO2018076732A1 (en) Method and apparatus for merging infrared image and visible light image
US20180081434A1 (en) Eye and Head Tracking
JP4328286B2 (en) Face area estimation device, face area estimation method, and face area estimation program
CN107992857A (en) A kind of high-temperature steam leakage automatic detecting recognition methods and identifying system
CN109754377A (en) A kind of more exposure image fusion methods
JP2000082147A (en) Method for detecting human face and device therefor and observer tracking display
WO2014044126A1 (en) Coordinate acquisition device, system and method for real-time 3d reconstruction, and stereoscopic interactive device
WO2010124497A1 (en) Method, device and system for motion detection
CN111144207B (en) Human body detection and tracking method based on multi-mode information perception
CN103902953B (en) A kind of screen detecting system and method
CN115100556B (en) Augmented reality method and device based on image segmentation and fusion and electronic equipment
CN110796032A (en) Video fence based on human body posture assessment and early warning method
CN109886195B (en) Skin identification method based on near-infrared monochromatic gray-scale image of depth camera
CN114114312A (en) Three-dimensional target detection method based on fusion of multi-focal-length camera and laser radar
CN113573035A (en) AR-HUD brightness self-adaptive adjusting method based on vision
CN110909571B (en) High-precision face recognition space positioning method
KR102347226B1 (en) Fusion sensor data visualization device and method
CN113762161A (en) Intelligent obstacle monitoring method and system
Shi et al. A method for detecting pedestrian height and distance based on monocular vision technology
JPH05215547A (en) Method for determining corresponding points between stereo images
CN113988957B (en) Automatic image scoring method and system based on element recognition
CN115937776A (en) Monitoring method, device, system, electronic equipment and computer readable storage medium
CN111833384B (en) Method and device for rapidly registering visible light and infrared images
JP7092616B2 (en) Object detection device, object detection method, and object detection program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant