CN115371815A - Image display processing method and device and electronic equipment


Info

Publication number
CN115371815A
CN115371815A
Authority
CN
China
Prior art keywords
image
target area
displaying
determining
target
Prior art date
Legal status
Pending
Application number
CN202211065380.8A
Other languages
Chinese (zh)
Inventor
顾晨辉
Current Assignee
Hangzhou Micro Image Software Co ltd
Original Assignee
Hangzhou Micro Image Software Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Micro Image Software Co ltd
Priority to CN202211065380.8A
Publication of CN115371815A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00 Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J5/02 Constructional details
    • G01J5/03 Arrangements for indicating or recording specially adapted for radiation pyrometers
    • G01J5/48 Thermography; techniques using wholly visual means
    • G01J2005/0077 Imaging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image display processing method, comprising the following steps: acquiring a thermal imaging image and a visible light image obtained by shooting the same scene; detecting an operation event on a first image currently displayed in an image presentation interface; determining a selected first target area in the first image based on position information of the detected operation event; searching, based on position calibration information between the first image and a second image, the second image for a second target area at the same position as the first target area; and displaying the image within the second target area of the second image. By applying the method and device, a portion of interest can be conveniently switched among the thermal imaging image, the visible light image, and the dual-light fusion image, improving user experience.

Description

Image display processing method and device and electronic equipment
Technical Field
The present disclosure relates to thermal imaging technologies, and in particular, to an image display processing method and apparatus, and an electronic device.
Background
With the continuous development of infrared imaging technology, methods and systems that use it to generate and process thermal imaging images are more and more widely applied.
In infrared imaging, the radiation energy of a detected object is processed by the system and converted into a thermal imaging image of the target object, from which the temperature distribution of the detected target can be obtained.
When a user shoots with a thermal-imaging-related camera, the camera often generates a thermal imaging image, a visible light image, and a dual-light fusion image of the same scene. The dual-light fusion image is displayed by superposing the thermal imaging image and the visible light image. When image information needs to be viewed, the thermal imaging image, the visible light image, or the dual-light fusion image can be selected by mode switching: switching to the visible light mode displays the visible light image on the image presentation interface; switching to the thermal imaging mode displays the thermal imaging image; and switching to the fusion mode displays the dual-light fusion image.
Switching among modes to display the visible light image, the thermal imaging image, and the dual-light fusion image is inconvenient for checking visual information. For example, when a point of abnormal temperature is found in the thermal imaging image and the user wants to view the visible light image at the corresponding position to identify the abnormally hot object, the existing scheme requires switching to the visible light image, and the position corresponding to the abnormal point of the thermal imaging image must be located on the visible light image manually; the search result is error-prone and the positions may not correspond. If a thermal imaging image contains several such points, switching back and forth is even more troublesome, corresponding positions cannot be found accurately, and user experience is poor. Although the dual-light fusion image displays both images simultaneously, the visible light portion and the thermal imaging portion cannot be clearly distinguished because the images are superimposed.
Disclosure of Invention
The application provides an image display processing method and apparatus, and an electronic device, which can conveniently switch the display of a portion of interest among a thermal imaging image, a visible light image, and a dual-light fusion image, improving user experience.
To achieve this purpose, the application adopts the following technical scheme:
An image display processing method comprises the following steps:
acquiring a thermal imaging image and a visible light image obtained by shooting the same scene;
detecting an operation event aiming at a currently displayed first image in an image presentation interface; wherein the first image is the thermal imaging image, the visible light image or a dual-light fusion image generated by superposing the thermal imaging image and the visible light image;
determining a selected first target area in the first image based on the detected position information of the operation event;
searching, based on first position calibration information between the first image and a second image, the second image for a second target area at the same position as the first target area; wherein the second image is another image, other than the first image, among the thermal imaging image, the visible light image, and the dual-light fusion image, and the first position calibration information comprises auxiliary information ensuring that pixel points at corresponding same positions in the first image and the second image coincide;
and displaying the image in the second target area in the second image.
Preferably, after determining the first target area, the method further comprises:
searching, based on second position calibration information between the first image and a third image, the third image for a third target area at the same position as the first target area; wherein the second position calibration information comprises auxiliary information ensuring that pixel points at corresponding same positions in the first image and the third image coincide;
and displaying the image in the third target area in the third image.
Preferably, the first position calibration information includes first reference point coordinates, the image size of the image where the first reference point is located, and the relative angle between the first image and the second image that makes pixel points at corresponding same positions in the first image and the second image coincide;
the first reference point coordinates are the coordinates of the pixel point, on the other image, at the position corresponding to the first reference point on either one of the first image and the second image;
the second position calibration information includes second reference point coordinates, the image size of the image where the second reference point is located, and the relative angle between the first image and the third image that makes pixel points at corresponding same positions in the first image and the third image coincide;
the second reference point coordinates are the coordinates of the pixel point, on the other image, at the position corresponding to the reference point on either one of the first image and the third image.
Preferably, the operation event comprises a click event;
the determining a selected first target area in the first image based on the detected position information of the operation event comprises:
determining the first target area based on click positions of at least three click events; wherein the boundary of the first target region passes through click positions of at least three of the click events;
or,
and determining the coverage area of the preset graph as the first target area by taking the click position of the click event as the center of the preset graph.
Preferably, the operational event comprises a line tracing event;
the determining a selected first target area in the first image based on the detected position information of the operation event comprises:
determining the first target area based on the track information of the line tracing event;
wherein the boundary of the first target region coincides with the trajectory information.
Preferably, the processing of determining the selected first target area in the first image is performed after detecting a preset end operation.
Preferably, the searching for the second target area corresponding to the first target area position in the second image based on the first position calibration information includes:
determining pixel points of corresponding positions of all the pixel points in the first target area in the second image based on the first position calibration information to form a second target area;
the searching for a third target area corresponding to the first target area position in the third image based on the second position calibration information includes:
and determining pixel points of corresponding positions of all the pixel points in the first target area in the third image based on the second position calibration information to form the third target area.
Preferably, the displaying the image within the second target area in the second image includes:
displaying the image within the second target area at the location of the first target area; or,
displaying the image within the second target area at a preset position of the image presentation interface; or,
displaying the image within the second target area on another image presentation interface different from the image presentation interface;
the displaying the image within the third target area in the third image includes:
displaying the image within the third target area at the location of the first target area; or,
displaying the image within the third target area at a preset position of the image presentation interface; or,
displaying the image within the third target area on another image presentation interface different from the image presentation interface.
Preferably, the displaying the image in the second target area at the position of the first target area comprises:
adding a layer on the position of the first target area for displaying the image in the second target area; or, at the position of the first target area, replacing the value of the corresponding pixel point in the first target area with the value of each pixel point in the second target area;
the displaying the image in the third target area at the position of the first target area comprises:
adding a layer on the position of the first target area for displaying the image in the third target area; or, at the position of the first target area, replacing the value of the corresponding pixel point in the first target area with the value of each pixel point in the third target area.
Preferably, the other image presentation interface is established by popping up a window.
An image display processing apparatus, comprising: an acquisition unit, an operation event detection unit, a target area delimiting unit, and a display unit;
the acquisition unit is used for acquiring a thermal imaging image and a visible light image which are obtained by shooting the same scene;
the operation event detection unit is used for detecting an operation event aiming at a currently displayed first image in an image presentation interface; wherein the first image is the thermal imaging image, the visible light image or a dual-light fusion image generated by superposing the thermal imaging image and the visible light image;
the target area delimiting unit is used for determining a selected first target area in the first image based on position information of the detected operation event, and for searching, based on first position calibration information between the first image and a second image, the second image for a second target area at the same position as the first target area; wherein the second image is another image, other than the first image, among the thermal imaging image, the visible light image, and the dual-light fusion image, and the first position calibration information comprises auxiliary information ensuring that pixel points at corresponding same positions in the first image and the second image coincide;
the display unit is used for displaying the image in the second target area in the second image.
Preferably, after determining the first target area, the target area delimiting unit is further configured to search, based on second position calibration information between the first image and a third image, the third image for a third target area at the same position as the first target area; wherein the second position calibration information comprises auxiliary information ensuring that pixel points at corresponding same positions in the first image and the third image coincide;
the display unit is further configured to display an image in the third target region in the third image.
Preferably, the first position calibration information includes a first reference point coordinate, an image size of an image where the first reference point is located, and a relative angle between the first image and the second image when pixels corresponding to the same position in the first image and the second image are ensured to be overlapped;
the first reference point coordinate is a pixel point coordinate of a corresponding position of a reference point on any one of the first image and the second image on the other image;
the second position calibration information comprises a second reference point coordinate, an image size of an image where the second reference point is located, and a relative angle between the first image and the third image when pixel points at the corresponding same positions in the first image and the third image are enabled to be coincident;
the second reference point coordinate is a pixel point coordinate of a corresponding position of the reference point on any one of the first image and the third image on the other image.
Preferably, the operation event comprises a click event;
in the operation event detection unit, determining a selected first target region in the first image based on the detected position information of the operation event includes:
determining the first target area based on click positions of at least three click events; wherein the boundary of the first target region passes through click positions of at least three of the click events;
or,
and determining the coverage area of the preset graph as the first target area by taking the click position of the click event as the center of the preset graph.
Preferably, the operation event comprises a line tracing event;
in the operation event detection unit, the determining a selected first target region in the first image based on the detected position information of the operation event includes:
determining the first target area based on the track information of the line tracing event;
wherein the boundary of the first target region coincides with the trajectory information.
In the operation event detection unit, after a preset end operation is detected, processing of determining a selected first target region in the first image is performed.
Preferably, in the target area delimiting unit, the searching for the second target area corresponding to the first target area position in the second image based on the first position calibration information includes:
determining pixel points of corresponding positions of all the pixel points in the first target area in the second image based on the first position calibration information to form a second target area;
in the target area delimiting unit, the searching for a third target area corresponding to the first target area position in the third image based on the second position calibration information includes:
and determining pixel points of corresponding positions of all the pixel points in the first target area in the third image based on the second position calibration information to form the third target area.
Preferably, in the display unit, the displaying the image within the second target area in the second image includes:
displaying the image within the second target area at the location of the first target area; or,
displaying the image within the second target area at a preset position of the image presentation interface; or,
displaying the image within the second target area on another image presentation interface different from the image presentation interface;
the displaying, in the display unit, the image within the third target area in the third image includes:
displaying the image within the third target area at the location of the first target area; or,
displaying the image within the third target area at a preset position of the image presentation interface; or,
displaying the image within the third target area on another image presentation interface different from the image presentation interface.
Preferably, in the display unit, the displaying the image in the second target region at the position of the first target region includes:
adding a layer at the position of the first target area for displaying the image within the second target area; or,
replacing the value of the corresponding pixel point in the first target area with the value of each pixel point in the second target area at the position of the first target area;
in the display unit, the displaying an image within the third target region at the position of the first target region includes:
adding a layer on the position of the first target area for displaying the image in the third target area; or, at the position of the first target area, replacing the value of the corresponding pixel point in the first target area with the value of each pixel point in the third target area.
Preferably, in the display unit, the other image presentation interface is established by popping up a window.
The present application further provides an electronic device comprising at least a computer-readable storage medium, and a processor;
the processor is configured to read executable instructions from the computer-readable storage medium and execute the instructions to implement any of the image display processing methods described above.
According to the technical scheme above, a thermal imaging image and a visible light image obtained by shooting the same scene are acquired; an operation event on a first image currently displayed in an image presentation interface is detected, the first image being the thermal imaging image, the visible light image, or a dual-light fusion image generated by superposing the two; a selected first target area in the first image is determined based on position information of the detected operation event; a second target area at the same position as the first target area is searched for in the second image based on first position calibration information, the second image being another image, other than the first image, among the thermal imaging image, the visible light image, and the dual-light fusion image; and the image within the second target area of the second image is displayed. In this way, on the one hand, the position calibration information records auxiliary information ensuring that corresponding positions in the first image and the second image coincide; on the other hand, a first target area of interest in the first image is conveniently determined by detecting the operation event, and a second target area at the corresponding position is then searched for in the second image according to the calibration information, so that the position area corresponding to the portion of interest is automatically found on the other image and displayed, improving user experience.
Drawings
FIG. 1 is a schematic diagram of a basic flow of an image display method according to the present application;
FIG. 2 is a flowchart illustrating an image display method according to an embodiment of the present disclosure;
FIG. 3 is an exemplary diagram of position marker information;
FIG. 4 is a functional start-up diagram of image fusion;
FIG. 5 is an exemplary illustration of selecting a first target region in an embodiment of the present application;
FIG. 6 is a second exemplary illustration of selecting a first target area in an embodiment of the present application;
FIG. 7 is a third exemplary diagram of selecting a first target area in an embodiment of the present application;
FIG. 8 is a schematic diagram of a basic structure of an image display apparatus according to the present application;
fig. 9 is a schematic diagram of a basic structure of an electronic device in the present application.
Detailed Description
For the purpose of making the objects, technical means and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings.
In the application, on any one of the thermal imaging image, the visible light image, and the dual-light fusion image, a first target area on the displayed image is determined by detecting an operation event; a second target area corresponding to the first target area on another image is then determined based on position calibration information and displayed. A portion of interest can thus be conveniently switched among the thermal imaging image, the visible light image, and the dual-light fusion image, improving user experience.
Fig. 1 is a basic flowchart of an image display method in the present application. As shown in fig. 1, the method includes:
step 101, acquiring a thermal imaging image and a visible light image obtained by shooting the same scene.
A thermal-imaging-related camera can generate a thermal imaging image and a visible light image simultaneously after shooting the same scene; shooting is usually carried out simultaneously by two different cameras at close positions, which generate the thermal imaging image and the visible light image respectively. Because the two cameras differ in resolution and position, the generated thermal imaging image and visible light image can differ in size and in slight viewing angle. Based on this, initial position calibration information between the thermal imaging image and the visible light image can be obtained. The position calibration information can include auxiliary information ensuring that corresponding positions in the thermal imaging image and the visible light image coincide; through angle and position correction, the initial position calibration information can ensure that pixel points corresponding to the same position in the two images coincide. The initial position calibration information is usually fixed in the camera-related information when the thermal camera leaves the factory.
In addition, superposing the acquired thermal imaging image and visible light image can generate a dual-light fusion image. During superposition, pixel points corresponding to the same position need to be superposed together, which can be achieved through the initial position calibration information. Since the dual-light fusion image is generated by superposing the thermal imaging image and the visible light image, the initial position calibration information also ensures that pixel points corresponding to the same position in the dual-light fusion image, the thermal imaging image, and the visible light image coincide.
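As an illustration of the superposition just described, the following sketch aligns a thermal imaging image to a visible light image using initial position calibration information (a reference-point offset and a relative angle) and then blends the two into a dual-light fusion image. This is a minimal sketch using OpenCV-style arrays; the function name, the offset/angle parameters, and the alpha value are illustrative assumptions, not taken from the application.

    import cv2
    import numpy as np

    def fuse_dual_light(visible, thermal, offset_xy, angle_deg, alpha=0.5):
        """Superpose `thermal` onto `visible` so that pixel points at the
        same scene position coincide (per the initial calibration info)."""
        h, w = thermal.shape[:2]
        # Rotate the thermal image by the calibrated relative angle.
        warp = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
        # Shift it by the calibrated reference-point offset, expressed in
        # the visible image's coordinate system.
        warp[0, 2] += offset_xy[0]
        warp[1, 2] += offset_xy[1]
        aligned = cv2.warpAffine(thermal, warp,
                                 (visible.shape[1], visible.shape[0]))
        # Blend only where the aligned thermal image has content
        # (assumes 3-channel uint8 images).
        mask = aligned.sum(axis=2) > 0
        fused = visible.copy()
        fused[mask] = (alpha * aligned[mask]
                       + (1 - alpha) * visible[mask]).astype(np.uint8)
        return fused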
Step 102, detecting an operation event aiming at a first image displayed currently in an image presentation interface.
In the application, the thermal imaging image may be displayed first, a region of interest selected on it, and the visible light image or dual-light fusion image of the corresponding region then displayed; or the visible light image may be displayed first, a region of interest selected on it, and the thermal imaging image or dual-light fusion image of the corresponding region then displayed; or the dual-light fusion image may be displayed first, a region of interest selected on it, and the thermal imaging image or visible light image of the corresponding region then displayed. For convenience of description, among the thermal imaging image, the visible light image, and the dual-light fusion image, the image displayed first on the image presentation interface is referred to as the first image, and the target image to be displayed is referred to as the second image.
For example, when the subject state at an abnormal temperature is focused, the thermal imaging image may be taken as the first image, and the visible light image may be taken as the second image; when temperature information of a specified subject is focused, a visible light image may be used as the first image and a thermal imaging image may be used as the second image. In particular, options can be provided in the interface for the user to select the first image and the second image, or default first images and second images of the system can be adopted.
For a presented first image, an operational event is detected for the first image. Wherein the operation event may be an operation event triggered by a user for selecting the region of interest.
Step 103, determining a selected first target area in the first image based on the position information of the detected operation event.
The operation event corresponds to position information, based on which a calibrated region can be determined. Since the operation event is performed on the first image, the region calibrated by the position information is mapped onto the first image, yielding a region in the first image called the first target region, which is the region selected by the operation event. For example, a frame-selection operation may be performed on the first image (e.g., a thermal imaging image) with a device such as a mouse; after the frame-selection operation is detected, the selected first target area is determined on the first image according to the position of the operation.
Step 104, searching a second target area corresponding to the first target area in the second image based on the first position calibration information.
In step 101, the initial position calibration information is used so that, through angle and position correction, the pixel points corresponding to the same position in the thermal imaging image, the visible light image, and the dual-light fusion image coincide. With reference to the meaning of the initial position calibration information, the application introduces position calibration information between the first image and the second image, referred to as the first position calibration information for short, which ensures that pixel points corresponding to the same position in the first image and the second image coincide. Specifically, the first position calibration information may be obtained from the initial position calibration information, or obtained in advance by comparing a first image and a second image of the same scene. For example, a first image (e.g., a thermal imaging image) and a second image (e.g., a visible light image or a dual-light fusion image) captured of the same scene may be manually dragged in advance so that pixel points corresponding to the same position in the two images coincide, and the position calibration information between the two dragged images is recorded as the first position calibration information.
Based on the first position calibration information, for each pixel point in the first target area, pixel points corresponding to the same position are determined in the second image to form a second target area. Therefore, the pixel points of the second target area and the pixel points of the first target area respectively correspond to the same position, and the superposition is realized.
Step 105, displaying the image in the second target area in the second image.
After the second target area is determined, the image within the second target area may be displayed for the user.
Through the processing of the steps, on one hand, a first target area of interest in the first image is determined through the detection of the operation event, so that the interested part is conveniently determined; on the other hand, based on the position calibration information, a second target area corresponding to the first target area and having the same position is automatically found in the second image, so that an image area corresponding to the interested part in another image is automatically and accurately determined. In this way, it is possible to switch the display of the portion of interest between the thermographic image and the visible light image with ease.
This concludes the basic method flow shown in fig. 1. On the basis of the flow shown in fig. 1, after step 103, the method may further include:
step 104a, searching a third target area corresponding to the same position as the first target area in a third image based on the second position calibration information;
wherein the third image is another image except the first image and the second image in the thermal imaging image, the visible light image and the double-light fusion image. The second position calibration information includes auxiliary information for ensuring that pixel points corresponding to the same position in the first image and the third image are overlapped. The second position calibration information is position calibration information between the first image and the third image, and the obtaining mode thereof may be the same as the first position calibration information, and is not repeated here. In this way, based on the second position specifying information, the third target area corresponding to the same position as the first target area can be specified.
Step 105a, displaying the image in the third target area in the third image.
The processing of steps 104a and 105a may be the same as that of steps 104 and 105, respectively, except that it is performed on the third image. Through the above processing, the corresponding region on the remaining image, besides the first and second images, is also searched for and displayed; that is, the thermal imaging image, the visible light image, and the dual-light fusion image are all displayed: one of them is displayed as the original image, and after the first target region is selected on it, the images of the corresponding regions on the other two images are displayed as well.
Next, a specific implementation of the above-described image display method will be described by way of a specific embodiment.
Fig. 2 is a schematic flowchart of an image display method in an embodiment of the present application. In this embodiment, a thermal imaging image is taken as a first image, and a visible light image is taken as a second image. As shown in fig. 2, the specific method flow includes:
step 201, acquiring a thermal imaging image, a visible light image and first position calibration information obtained by shooting the same scene.
The thermal imaging image and the visible light image are obtained by shooting the same scene; specifically, the pixel values of the pixel points of the obtained thermal imaging image and visible light image can be stored in a storage unit. Generally, the first position calibration information between the thermal imaging image and the visible light image may be generated directly from the initial position calibration information recorded by the camera. The acquired first position calibration information may also be stored in the storage unit, in association with the thermal imaging image and the visible light image. The first position calibration information may be, for example, a registration relationship or a mapping relationship between the images.
Optionally, the first position calibration information may include the coordinates of the first reference point, the length and width of the image in which the reference point is located, and the relative angle between the thermal imaging image and the visible light image that makes corresponding positions in the two coincide. The first reference point coordinates are the coordinates of the pixel point on the other image (hereinafter referred to as the first auxiliary reference point) that corresponds to the same position as the reference point chosen on either the thermal imaging image or the visible light image. In general, a boundary pixel point, or a preset pixel point, of the smaller of the thermal imaging image and the visible light image may be used as the first reference point.
For example, generally, the size of the thermal imaging image is smaller than that of the visible light image, the pixel point a at the upper left corner of the thermal imaging image may be used as the first reference point, and the first reference point coordinates are coordinates of the pixel point B at the same position on the visible light image corresponding to the pixel point a. Based on this, the first position calibration information may include coordinates of the pixel point B, a length and a width of the thermal imaging image, and an angle of the thermal imaging image relative to the visible light image when the pixel points at the same corresponding positions of the thermal imaging image and the visible light image are overlapped, as shown in fig. 3.
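A possible in-memory form of this first position calibration information, matching the example above (the coordinates of pixel point B, the length and width of the thermal imaging image, and the relative angle), is sketched below. The field names are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class PositionCalibration:
        ref_x: int         # x of pixel point B on the visible light image
        ref_y: int         # y of pixel point B on the visible light image
        img_width: int     # width of the image containing the first reference point
        img_height: int    # height of that image
        rel_angle: float   # relative angle (degrees) making same-position pixels coincide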
Step 202, displaying the thermal imaging image on the image presentation interface.
The thermal imaging image is rendered on the image presentation interface, and the specific processing mode can be performed by adopting the existing mode, which is not described herein again.
Step 203, starting the fusion display function of the image.
The processing of the thermal imaging image and the visible light image in the embodiment is triggered by starting a preset fusion display function. Specifically, the manner of starting the fusion display function may be to set an option (e.g., a button) on or outside the image presentation interface, and when the selection of the corresponding option is detected, start the fusion display function; alternatively, the fusion display function may be started by default when the image presentation interface is opened.
An example of activating the fusion function through a button is given below. As shown in fig. 4, a "fusion magic mirror" button is provided on the image presentation interface. When the display processing of the application is required, the user can click the "fusion magic mirror" button; after detecting that the button is pressed, the system enters the display processing of the application and starts to execute the subsequent steps.
The processing sequence of steps 202 and 203 may be reversed, i.e. step 203 is executed first, and then step 202 is executed.
Step 204, detecting an operation event for the thermal imaging image in the image presentation interface.
The operational event may be user-triggered, or system-automatically triggered. The operation event may include a click event or a line event, etc. For example, the operation event may be an operation event input by a mouse, a stylus pen, or the like.
Step 205, determining a selected first target region in the thermal imaging image based on the position information of the detected operation event.
The operation events correspond to corresponding position information, and the region marked by the position information of the operation events can be determined through the position information of the operation events and is used as the selected first target region in the thermal imaging image. And determining the first target area in a corresponding mode according to different types of operation events.
Specifically, when the operation event includes a click event, the target area may be determined based on click positions of at least three corresponding click events. For example, after each click event is detected, the click position of each click event is determined, the click positions are sequentially connected to form a closed region, and a part in the closed region is used as a target region, as shown in fig. 5.
When the operation event comprises a click event, the click position of the click event can be used as the center of the preset graph, and the coverage area of the preset graph is determined to be the first target area. For example, the preset graph is a circle with a radius of 5 cm, and after the click position of the click event is determined, a circular area with a radius of 5 cm is determined as the first target area by taking the click position as the center of the circle, as shown in fig. 6.
When the operation event comprises a line tracing event, the target area can be determined based on the track information of the line tracing event; wherein the boundary of the target area coincides with the trajectory information. For example, after a line-tracing event is detected, a closed region is formed based on the trace of the line-tracing event, and a part in the closed region is taken as a target region, as shown in fig. 7. The line tracing event can trace lines according to the input track completely, or trace lines according to the input mark points by adopting a preset track. For example, the mouse may be used as an input device, and the line tracing may be completely traced according to the movement track of the mouse, or all the line tracing tracks may be set to be straight lines, and the line tracing tracks may be determined according to the starting point and the ending point of the mouse click. For another example, a mouse is used as an input device, and a frame selection operation is performed by the mouse to determine the trace track, and the shape of the frame selection may be preset, for example, circular or rectangular.
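The three ways of determining the first target area described above can be sketched as follows, each producing a binary mask over the first image. This is a minimal sketch using OpenCV to rasterise the boundaries; the function names and the mask convention are assumptions.

    import cv2
    import numpy as np

    def region_from_clicks(clicks, image_shape):
        """Closed polygon whose boundary passes through the click positions
        of at least three click events, connected in order."""
        mask = np.zeros(image_shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [np.array(clicks, dtype=np.int32)], 255)
        return mask

    def region_from_center(click, radius_px, image_shape):
        """Preset graph (here a circle) with the click position as its center."""
        mask = np.zeros(image_shape[:2], dtype=np.uint8)
        cv2.circle(mask, (int(click[0]), int(click[1])), radius_px, 255, -1)
        return mask

    def region_from_trace(trace_points, image_shape):
        """Closed region formed by the trajectory of a line tracing event."""
        mask = np.zeros(image_shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [np.array(trace_points, dtype=np.int32)], 255)
        return mask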
In addition, an end operation may also be set in advance, and the process of determining the target area may be executed after the end operation by the user is detected. For example, when the operation event includes a click event by a mouse, the end operation may be set to a right click of the mouse, a left click of the mouse, or no detection of a mouse click operation within a set time. When the operation event includes a frame selection trace event by the mouse, the end operation may be set as a mouse up operation.
Step 206, searching the visible light image for a second target area at the same position as the first target area based on the first position calibration information.
When the second target area is searched, the pixel points corresponding to the same position in the second image can be found for each pixel point in the first target area based on the first position calibration information to form the second target area.
When the first reference point is located in the thermal imaging image, optionally, for any pixel point X in the first target region, determining a relative positional relationship T between the pixel point X and the first reference point (for example, the pixel point a); and then, based on the position calibration information, determining a first auxiliary reference point (for example, the pixel point B) corresponding to the same position as the first reference point in the visible light image, and finding a pixel point Y having a position relationship T with the first auxiliary reference point as a pixel point corresponding to the same position as the pixel point X.
When the first reference point is located in the visible light image, optionally, in the thermal imaging image, determining a first auxiliary reference point (for example, the aforementioned pixel point B) corresponding to the reference point at the same position, and for any pixel point X in the first target region, determining a relative positional relationship T between the pixel point X and the first auxiliary reference point (for example, the aforementioned pixel point B); based on the position calibration information, a pixel point Y having a position relationship T with the first reference point (e.g., the pixel point a) is found in the visible light image, and is used as a pixel point corresponding to the same position as the pixel point X.
The above is an example in which, as in the present embodiment, the thermal imaging image is the first image and the visible light image is the second image. A corresponding approach is adopted when the visible light image is the first image and the thermal imaging image is the second image, or in other cases. To summarize, the process of determining the pixel points corresponding to the same position may be performed as follows:
1. when the first reference point is located in the first image, optionally, for any pixel point X in the first target region, determining a relative position relationship T between the pixel point X and the first reference point; based on the position calibration information, determining a first auxiliary reference point corresponding to the same position as the first reference point in the second image, and finding a pixel point Y having a position relation T with the first auxiliary reference point as a pixel point corresponding to the same position as the pixel point X;
2. when the first reference point is located in the second image, optionally, in the first image, determining a first auxiliary reference point corresponding to the same position as the first reference point, and for any pixel point X in the first target region, determining a relative position relationship T between the pixel point X and the first auxiliary reference point; and based on the first position calibration information, finding a pixel point Y having a position relation T with the first reference point in the second image as a pixel point corresponding to the same position as the pixel point X.
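The lookup summarised in cases 1 and 2 above can be sketched as follows: the relative positional relationship T of a pixel point X to the (auxiliary) reference point in the first image is preserved and applied from the corresponding reference point in the second image, with the calibrated relative angle corrected along the way. The function names are assumptions.

    import math

    def map_pixel(x, y, ref_first, ref_second, rel_angle_deg=0.0):
        """Return the pixel in the second image at the same scene
        position as (x, y) in the first image."""
        # Relative positional relationship T between X and the
        # reference point in the first image.
        dx, dy = x - ref_first[0], y - ref_first[1]
        # Correct for the calibrated relative angle between the images.
        a = math.radians(rel_angle_deg)
        dx, dy = (dx * math.cos(a) - dy * math.sin(a),
                  dx * math.sin(a) + dy * math.cos(a))
        # Apply T from the reference point's position in the second image.
        return ref_second[0] + dx, ref_second[1] + dy

    def map_region(pixels, ref_first, ref_second, rel_angle_deg=0.0):
        """Second target area: same-position pixels for every pixel
        point of the first target area."""
        return [map_pixel(x, y, ref_first, ref_second, rel_angle_deg)
                for x, y in pixels]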
This embodiment discusses only the case of searching the second image for the second target region at the same position as the first target region and displaying it. In practice, when a third target region at the same position as the first target region needs to be searched for and displayed on a third image, the second position calibration information may, similarly to the first position calibration information, adopt a registration relationship or mapping relationship between the images, specifically including: the coordinates of a second reference point, the image size of the image where the second reference point is located, and the relative angle between the first image and the third image that makes pixel points at corresponding same positions in the first image and the third image coincide. The process of determining same-position pixel points according to the second position calibration information is the same as that according to the first position calibration information, except that the first reference point is replaced by the second reference point and the first auxiliary reference point by the second auxiliary reference point, where the second reference point coordinates are the coordinates of the pixel point (i.e., the second auxiliary reference point) at the same position on the other image as the reference point on either one of the first image and the third image.
The second position calibration information may also be obtained from the initial position calibration information, or may be set by manual dragging.
By the method, the pixel points corresponding to the same position in the second image are determined aiming at the pixel points of the first target area, the determined pixel points form the second target area, and the second target area is the image area corresponding to the same position as the first target area in the second image. The method for determining the second target area in the second image can automatically discover the corresponding position area on one hand and ensure the accuracy of the corresponding area on the other hand.
Step 207, displaying the image in the second target area in the second image.
And cutting the image of the second target area from the saved second image and displaying the image.
When the image of the second target area is displayed, the following method can be adopted:
1. An image of the second target area may be displayed at the location of the first target area.
Optionally, a layer may be newly added at the position of the first target area for displaying the image within the second target area; the size of the layer can be consistent with that of the second target area, the layer covers the first target area, and the image of the second target area is rendered on the newly added layer.
Or, optionally, the value of each pixel point in the second target area may be used to replace the value of the corresponding pixel point in the first target area at the position of the first target area; that is, the image of the second target area is displayed on the layer on which the thermal imaging image is located, and the image within the second target area is rendered over the first target area.
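The pixel-replacement variant can be sketched as below: inside the first target area (given as a mask), each pixel of the first image is overwritten with the same-position pixel of the second image. The sketch assumes the second image has already been aligned to the first image's coordinates and size; the names are illustrative.

    import numpy as np

    def show_in_place(first_img, second_img, mask):
        """Replace the values of pixel points of the first target area
        with the values of the same-position pixel points of the
        second (aligned) image."""
        out = first_img.copy()
        out[mask > 0] = second_img[mask > 0]
        return out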
2. Displaying the image in the second target area at a preset position of the image presentation interface;
alternatively, an area may be opened up at a preset position on the image presentation interface of the thermographic image for displaying the image of the second target area.
3. The image within the second target area is displayed at another image presentation interface distinct from the image presentation interface.
The image within the second target region may be displayed using another image presentation interface than the image presentation interface of the thermal imaging image. Optionally, a new window may pop up, and the image in the second target region is rendered and displayed on the new window, and the popped up new window may partially or completely cover the original image presentation interface, or the popped up new window may be located outside the original image presentation interface.
In displaying the image in the second target region, the image may be displayed in an original size, or may be displayed after being enlarged or reduced at a set ratio. When the image in the second target region is displayed in an enlarged or reduced manner, the size of the display area occupied by the second target region can be adaptively increased or decreased when the image in the second target region is displayed at the position of the first target region.
Meanwhile, the image display processing may be implemented on various electronic devices, such as a computer or a mobile phone, with the operation event for determining the first target area input through an input device such as a mouse or a finger. As the operation event input by the input device updates (for example, as the mouse position moves), the operation event is tracked and detected in real time; the latest first target area and the corresponding second target area are determined from the currently updated operation event, and the image within the latest second target area is updated and displayed in real time. When the display needs to be withdrawn, a preset withdraw operation can be performed, after which the first image is displayed on the image presentation interface.
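A hypothetical event loop for this real-time behaviour, reusing the region_from_center and show_in_place sketches above, might look as follows. The ui object and its methods are assumptions introduced only for illustration, not part of the application.

    def run_fusion_magic_mirror(ui, first_img, second_img_aligned, radius_px=50):
        """Track the input device, refresh the 'fusion magic mirror'
        region in real time, and restore the first image on withdraw."""
        while True:
            event = ui.poll_event()                # e.g. mouse move or withdraw
            if event.is_withdraw():
                ui.render(first_img)               # preset withdraw operation
                break
            mask = region_from_center(event.position, radius_px,
                                      first_img.shape)
            ui.render(show_in_place(first_img, second_img_aligned, mask))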
This concludes the method flow shown in fig. 2. Through the processing of this embodiment, a portion of interest can be selected through an operation event on the first image, and the image area corresponding to it is automatically searched for in the second image and/or the third image for display. On the one hand, automatic searching is realized; on the other hand, the accuracy of the corresponding position area is improved. A portion of interest can be conveniently switched among the thermal imaging image, the visible light image, and the dual-light fusion image, and positions of interest can be rapidly located through different viewing angles.
An example of an interactive scheme implementing the image display processing of the application is given below, still taking the thermal imaging image as the first image and the visible light image as the second image:
1. The user views the thermal imaging image;
2. The user clicks the "fusion magic mirror" option;
3. The user moves the "fusion magic mirror" over the thermal imaging image with a mouse, a finger, or the like;
4. When the "fusion magic mirror" is moved to a certain part, that part displays the visible light image corresponding to the same position.
Through this interaction, the user can conveniently find the visible light picture of the region of interest, and can thus flexibly read a single picture from different dimensions.
The application also provides an image display processing apparatus, which can be used to implement the image display processing method of the application. Fig. 8 is a schematic diagram of the basic structure of the image display processing apparatus. As shown in fig. 8, the apparatus includes: an acquisition unit, an operation event detection unit, a target area delimiting unit, and a display unit;
the acquisition unit is used for acquiring a thermal imaging image and a visible light image obtained by shooting the same scene;
the operation event detection unit is used for detecting an operation event aiming at a currently displayed first image in the image presentation interface; the first image is a thermal imaging image, a visible light image or a double-light fusion image generated by superposing the thermal imaging image and the visible light image;
the target area delimiting unit is configured to determine a selected first target area in the first image based on position information of the detected operation event, and to search, based on first position calibration information between the first image and a second image, the second image for a second target area at the same position as the first target area; wherein the second image is another image, other than the first image, among the thermal imaging image, the visible light image, and the dual-light fusion image, and the first position calibration information comprises auxiliary information ensuring that pixel points at corresponding same positions in the first image and the second image coincide;
and the display unit is used for displaying the image in the second target area in the second image.
Optionally, after the target area delimiting unit determines the first target area, it may further search, based on the second position calibration information, the third image for a third target area at the same position as the first target area; the image within the third target area in the third image is then displayed. The second position calibration information comprises auxiliary information ensuring that pixel points at corresponding same positions in the first image and the third image coincide.
Optionally, the first position calibration information includes coordinates of the first reference point, an image size of an image where the first reference point is located, and a relative angle between the first image and the second image when pixels at the same corresponding positions in the first image and the second image are ensured to coincide; the first reference point coordinate is a pixel point coordinate of a corresponding position of a reference point on any one of the first image and the second image on the other image;
the second position calibration information comprises a second reference point coordinate, the image size of the image where the second reference point is located, and a relative angle between the first image and the third image when pixel points at the corresponding same positions in the first image and the third image are enabled to be coincident;
the second reference point coordinate is a pixel point coordinate of a corresponding position of the reference point on any one of the first image and the third image on the other image.
Optionally, the operation event includes a click event.
In the operation event detection unit, determining the selected first target area in the first image based on the position information of the detected operation event may specifically include:
determining the first target area based on the click positions of at least three click events, wherein the boundary of the first target area passes through the click positions of the at least three click events;
or,
taking the click position of the click event as the center of a preset graph, and determining the coverage area of the preset graph as the first target area.
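As an illustration, both variants can be expressed as a boolean mask (a sketch assuming OpenCV for rasterisation; the circle as the preset graph and the preset_radius parameter are our choices, not the patent's):

    import numpy as np
    import cv2  # rasterisation helper; any polygon-fill routine would do

    def region_from_clicks(clicks, image_shape, preset_radius=30):
        # Three or more clicks: polygon whose boundary passes through them.
        # One click: centre of a preset graph (here an assumed circle).
        mask = np.zeros(image_shape[:2], dtype=np.uint8)
        if len(clicks) >= 3:
            cv2.fillPoly(mask, [np.array(clicks, dtype=np.int32)], 1)
        else:
            cv2.circle(mask, tuple(clicks[0]), preset_radius, 1, thickness=-1)
        return mask.astype(bool)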
Optionally, the operation event includes a line tracing event.
In the operation event detection unit, determining the selected first target area in the first image based on the position information of the detected operation event may specifically include:
determining the first target area based on the trajectory information of the line tracing event, wherein the boundary of the first target area coincides with the trajectory.
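The line-tracing variant reduces to filling the traced curve, under the assumption that the gesture is closed into a polygon (a sketch, reusing the same rasterisation approach):

    import numpy as np
    import cv2

    def region_from_trace(trace_points, image_shape):
        # trace_points: sampled (x, y) positions of the tracing gesture.
        # Filling the closed polygon makes the mask boundary coincide
        # with the trajectory, as the text requires.
        mask = np.zeros(image_shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [np.array(trace_points, dtype=np.int32)], 1)
        return mask.astype(bool)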
In the operation event detection unit, the processing of determining the selected first target area in the first image is executed after a preset end operation is detected.
Optionally, in the target area delimiting unit, searching, in the second image, for the second target area at the same position as the first target area based on the first position calibration information may specifically include: determining, based on the first position calibration information, the pixel points in the second image at the positions corresponding to all the pixel points in the first target area, which together form the second target area.
In the target area delimiting unit, searching, in the third image, for the third target area at the same position as the first target area based on the second position calibration information may specifically include: determining, based on the second position calibration information, the pixel points in the third image at the positions corresponding to all the pixel points in the first target area, which together form the third target area.
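Pixel by pixel, this could look as follows (a sketch; map_point_fn stands for the hypothetical single-point transform above with its calibration arguments bound, e.g. via functools.partial):

    import numpy as np

    def map_region(mask, map_point_fn):
        # Map every pixel of the first target area into the second image;
        # the resulting coordinate set forms the second target area.
        ys, xs = np.nonzero(mask)
        second_area = set()
        for x, y in zip(xs.tolist(), ys.tolist()):
            mx, my = map_point_fn(x, y)
            second_area.add((int(round(mx)), int(round(my))))
        return second_area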
Optionally, in the display unit, displaying the image within the second target area of the second image may specifically include:
displaying the image within the second target area at the position of the first target area; or,
displaying the image within the second target area at a preset position of the image presentation interface; or,
displaying the image within the second target area on another image presentation interface different from the image presentation interface.
In the display unit, displaying the image within the third target area of the third image may specifically include:
displaying the image within the third target area at the position of the first target area; or,
displaying the image within the third target area at a preset position of the image presentation interface; or,
displaying the image within the third target area on another image presentation interface different from the image presentation interface.
Optionally, in the display unit, displaying the image within the second target area at the position of the first target area may specifically include:
adding a layer at the position of the first target area for displaying the image within the second target area; or,
replacing, at the position of the first target area, the values of the corresponding pixel points in the first target area with the values of the pixel points in the second target area.
In the display unit, displaying the image within the third target area at the position of the first target area may specifically include:
adding a layer at the position of the first target area for displaying the image within the third target area; or,
replacing, at the position of the first target area, the values of the corresponding pixel points in the first target area with the values of the pixel points in the third target area.
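In the registered, same-size case both display options reduce to a few lines (a sketch assuming a boolean mask; the returned layer would be drawn over the first image by the rendering pipeline):

    import numpy as np

    def show_region_in_place(first_img, second_img, mask, use_layer=False):
        if use_layer:
            layer = np.zeros_like(first_img)
            layer[mask] = second_img[mask]   # the layer shows only the region
            return first_img, layer          # compositor draws layer on top
        out = first_img.copy()
        out[mask] = second_img[mask]         # replace pixel values in place
        return out, None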
Optionally, in the display unit, the other image presentation interface is established by way of a pop-up window.
Fig. 9 shows a schematic structural diagram of an electronic device according to still another embodiment of the present application. Specifically:
the electronic device may include a processor 901 with one or more processing cores, a memory 902 comprising one or more computer-readable storage media, and a computer program stored in the memory and executable on the processor. The above-described image display processing method is implemented when the program stored in the memory 902 is executed.
Specifically, in practical applications, the electronic device may further include a power supply 903, an input/output unit 904, and the like. Those skilled in the art will appreciate that the configuration shown in fig. 9 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently. Wherein:
the processor 901 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the server and processes data by running or executing software programs and/or modules stored in the memory 902 and calling data stored in the memory 902, thereby performing overall monitoring of the electronic device.
The memory 902 may be used to store software programs and modules, i.e., the computer-readable storage media described above. The processor 901 executes various functional applications and data processing by running the software programs and modules stored in the memory 902. The memory 902 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required for at least one function, and the like; the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 902 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 902 may also include a memory controller to provide the processor 901 with access to the memory 902.
The electronic device further includes a power supply 903 for supplying power to each component. The power supply 903 may be logically connected to the processor 901 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 903 may also include one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.
The electronic device may also include an input/output unit 904, which may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input/output unit 904 may also be used to display information input by or provided to the user, as well as various graphical user interfaces, which may be composed of graphics, text, icons, video, and any combination thereof.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (13)

1. A method for processing image display, comprising:
acquiring a thermal imaging image and a visible light image obtained by shooting the same scene;
detecting an operation event for a currently displayed first image in an image presentation interface; wherein the first image is the thermal imaging image, the visible light image, or a dual-light fusion image generated by superposing the thermal imaging image and the visible light image;
determining a selected first target area in the first image based on the detected position information of the operation event;
searching, in a second image, a second target area at the same position as the first target area based on first position calibration information between the first image and the second image; wherein the second image is one of the thermal imaging image, the visible light image, and the dual-light fusion image other than the first image, and the first position calibration information comprises auxiliary information ensuring that pixel points at corresponding positions in the first image and the second image coincide;
and displaying the image in the second target area in the second image.
2. The method of claim 1, wherein after determining the first target region, the method further comprises:
searching, in a third image, a third target area at the same position as the first target area based on second position calibration information between the first image and the third image; wherein the second position calibration information comprises auxiliary information ensuring that pixel points at corresponding positions in the first image and the third image coincide;
and displaying the image in the third target area in the third image.
3. The method according to claim 1 or 2, wherein the first position calibration information comprises a first reference point coordinate, an image size of the image where the first reference point is located, and a relative angle between the first image and the second image when pixel points at corresponding positions in the first image and the second image are made to coincide;
the first reference point coordinate is the pixel coordinate, on one of the first image and the second image, of the position corresponding to a first reference point on the other image;
the second position calibration information comprises a second reference point coordinate, an image size of the image where the second reference point is located, and a relative angle between the first image and the third image when pixel points at corresponding positions in the first image and the third image are made to coincide;
the second reference point coordinate is the pixel coordinate, on one of the first image and the third image, of the position corresponding to a second reference point on the other image.
4. The method of claim 1, wherein the operation event comprises a click event;
the determining a selected first target area in the first image based on the detected position information of the operation event comprises:
determining the first target area based on click positions of at least three click events; wherein the boundary of the first target region passes through click positions of at least three of the click events;
or,
taking the click position of the click event as the center of a preset graph, and determining the coverage area of the preset graph as the first target area.
5. The method of claim 1, wherein the operation event comprises a line tracing event;
the determining a selected first target area in the first image based on the detected position information of the operation event comprises:
determining the first target area based on the track information of the line tracing event;
wherein the boundary of the first target region coincides with the trajectory information.
6. The method according to claim 4, wherein the process of determining the selected first target region in the first image is performed after a preset end operation is detected.
7. The method according to claim 1 or 2, wherein the searching, in the second image, for the second target area at the same position as the first target area based on the first position calibration information comprises:
determining, based on the first position calibration information, pixel points in the second image at positions corresponding to all pixel points in the first target area to form the second target area;
the searching, in the third image, for the third target area at the same position as the first target area based on the second position calibration information comprises:
determining, based on the second position calibration information, pixel points in the third image at positions corresponding to all pixel points in the first target area to form the third target area.
8. The method of claim 1 or 2, wherein said displaying the image within the second target area in the second image comprises:
displaying the image within the second target area at the position of the first target area; or,
displaying the image within the second target area at a preset position of the image presentation interface; or,
displaying the image within the second target area on another image presentation interface different from the image presentation interface;
said displaying the image within the third target area in the third image comprises:
displaying the image within the third target area at the position of the first target area; or,
displaying the image within the third target area at a preset position of the image presentation interface; or,
displaying the image within the third target area on another image presentation interface different from the image presentation interface.
9. The method of claim 8, wherein said displaying the image within the second target region at the location of the first target region comprises:
adding a layer at the position of the first target area for displaying an image in the second target area; or, at the position of the first target area, replacing the value of the corresponding pixel point in the first target area with the value of each pixel point in the second target area;
the displaying the image in the third target area at the position of the first target area comprises:
adding a layer on the position of the first target area for displaying the image in the third target area; or, at the position of the first target area, replacing the value of the corresponding pixel point in the first target area with the value of each pixel point in the third target area.
10. The method of claim 8, wherein the other image presentation interface is established by way of a pop-up window.
11. An apparatus for processing image display, comprising: an acquisition unit, an operation event detection unit, a target area delimiting unit, and a display unit;
the acquisition unit is used for acquiring a thermal imaging image and a visible light image which are obtained by shooting the same scene;
the operation event detection unit is used for detecting an operation event aiming at a currently displayed first image in the image presentation interface; wherein the first image is the thermal imaging image, the visible light image or a dual-light fusion image generated by superposing the thermal imaging image and the visible light image;
the target area delimiting unit is used for determining a selected first target area in the first image based on the detected position information of the operation event, and for searching, in a second image, a second target area at the same position as the first target area based on first position calibration information between the first image and the second image; wherein the second image is one of the thermal imaging image, the visible light image, and the dual-light fusion image other than the first image, and the first position calibration information comprises auxiliary information ensuring that pixel points at corresponding positions in the first image and the second image coincide;
the display unit is used for displaying the image in the second target area in the second image.
12. The apparatus according to claim 11, wherein the target area delimiting unit, after determining the first target area, is further configured to search, in a third image, a third target area at the same position as the first target area based on second position calibration information between the first image and the third image; the second position calibration information comprises auxiliary information ensuring that pixel points at corresponding positions in the first image and the third image coincide;
the display unit is further used for displaying an image in the third target area in the third image;
if the operation event comprises a click event, then:
in the operation event detection unit, determining a selected first target region in the first image based on the detected position information of the operation event includes:
determining the first target area based on click positions of at least three click events; wherein the boundary of the first target region passes through click positions of at least three of the click events; or, taking the click position of the click event as the center of a preset graph, and determining the coverage area of the preset graph as the first target area;
if the operation event comprises a line tracing event, then:
in the operation event detection unit, the determining a selected first target region in the first image based on the detected position information of the operation event includes:
determining the first target area based on the track information of the line tracing event;
wherein the boundary of the first target area coincides with the trajectory information;
in the operation event detection unit, after a preset end operation is detected, executing processing for determining a selected first target area in the first image;
in the target area delimiting unit, the searching, in the second image, for the second target area at the same position as the first target area based on the first position calibration information comprises:
determining, based on the first position calibration information, pixel points in the second image at positions corresponding to all pixel points in the first target area to form the second target area;
in the target area delimiting unit, the searching, in the third image, for the third target area at the same position as the first target area based on the second position calibration information comprises:
determining, based on the second position calibration information, pixel points in the third image at positions corresponding to all pixel points in the first target area to form the third target area;
in the display unit, the displaying the image within the second target area in the second image comprises:
displaying the image within the second target area at the position of the first target area; or,
displaying the image within the second target area at a preset position of the image presentation interface; or,
displaying the image within the second target area on another image presentation interface different from the image presentation interface;
in the display unit, the displaying the image within the third target area in the third image comprises:
displaying the image within the third target area at the position of the first target area; or,
displaying the image within the third target area at a preset position of the image presentation interface; or,
displaying the image within the third target area on another image presentation interface different from the image presentation interface;
in the display unit, the displaying an image within the second target region at a position of the first target region includes:
adding a layer on the position of the first target area for displaying the image in the second target area; or, at the position of the first target area, replacing the value of the corresponding pixel point in the first target area with the value of each pixel point in the second target area;
in the display unit, the displaying an image within the third target region at the position of the first target region includes:
adding a layer on the position of the first target area for displaying the image in the third target area; or, at the position of the first target area, replacing the value of the corresponding pixel point in the first target area with the value of each pixel point in the third target area.
13. An electronic device, comprising at least a computer-readable storage medium, and further comprising a processor;
the processor is configured to read the executable instructions from the computer-readable storage medium and execute the instructions to implement the image display processing method according to any one of claims 1 to 10.
CN202211065380.8A 2022-08-31 2022-08-31 Image display processing method and device and electronic equipment Pending CN115371815A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211065380.8A CN115371815A (en) 2022-08-31 2022-08-31 Image display processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115371815A true CN115371815A (en) 2022-11-22

Family

ID=84069680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211065380.8A Pending CN115371815A (en) 2022-08-31 2022-08-31 Image display processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115371815A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination