CN114299269A - Display method, display device, display system, electronic device, and storage medium - Google Patents


Info

Publication number
CN114299269A
Authority
CN
China
Prior art keywords
image
dimensional
target
space model
dimensional space
Prior art date
Legal status
Pending
Application number
CN202111658477.5A
Other languages
Chinese (zh)
Inventor
谢潮贤
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202111658477.5A priority Critical patent/CN114299269A/en
Publication of CN114299269A publication Critical patent/CN114299269A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a display method, a display apparatus, a display system, an electronic device, and a storage medium. The method is applied to a display device and includes the following steps: receiving an image sent by a mobile terminal, wherein the image is obtained by shooting a target building; comparing the image with three-dimensional elements in a three-dimensional space model and confirming a target three-dimensional element from the model, the model being constructed based on multi-frame collected images of the target building; and annotating the target three-dimensional element and displaying the resulting identifier on the model. With this scheme, content from the mobile terminal can be displayed on the display-device side, the mobile terminal and the display device are linked, and information about the target building on the mobile-terminal side can be determined quickly on the display-device side.

Description

Display method, display device, display system, electronic device, and storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to a display method, a display device, a display system, an electronic device, and a storage medium.
Background
Current large-screen monitoring displays are almost always a bare model; there is no way to locate and display personnel, materials, and events in real time.
In the related art, positioning is based on UWB (Ultra-Wideband) technology: the user wears a dedicated positioning hardware device, which measures distances to provide a position display.
Disclosure of Invention
The application provides at least a display method, a display device, a display system, an electronic device, and a storage medium.
A first aspect of the present application provides a display method applied to a display device, including: receiving an image sent by a mobile terminal, wherein the image is obtained by shooting a target building; comparing the image with three-dimensional elements in a three-dimensional space model and confirming a target three-dimensional element from the model, the model being constructed based on multi-frame collected images of the target building; and annotating the target three-dimensional element and displaying the resulting identifier on the model.
In this way, by comparing the image with the three-dimensional elements of the model, determining the element corresponding to the image, and annotating that element, content from the mobile terminal can be displayed on the display-device side, the mobile terminal and the display device are linked, and information about the target building on the mobile-terminal side can be determined quickly on the display-device side.
In some embodiments, comparing the image with the three-dimensional elements in the three-dimensional space model and determining the target three-dimensional element from the model includes: acquiring a target region in the image, the target region being obtained by annotating a corresponding region of the image; and comparing the target region with the three-dimensional elements in the model to confirm the target three-dimensional element.
In this way, by comparing only the target region of the image with the three-dimensional elements and annotating the matched element, whole-image comparison is reduced, the load on the display device is lowered, and comparison efficiency is improved.
In some embodiments, comparing the image with the three-dimensional elements in the three-dimensional space model and determining the target three-dimensional element from the model includes: performing feature extraction on the image to obtain feature information; and comparing the feature information with the three-dimensional elements in the model to confirm the target three-dimensional element.
In this way, by comparing the feature information rather than the whole image with the three-dimensional elements and annotating the matched element, whole-image comparison is reduced, the load on the display device is lowered, and comparison efficiency is improved.
In some embodiments, the method further includes receiving annotation information sent by the mobile terminal, the annotation information being associated with the image. Annotating the target three-dimensional element and displaying the identifier then includes: annotating the target three-dimensional element with the image and the annotation information, and displaying the resulting identifier on the three-dimensional space model.
In this way, by annotating the target three-dimensional element with both the image and the annotation information, content from the mobile terminal is displayed on the display-device side, the two devices are linked, and information about the target building on the mobile-terminal side can be determined quickly on the display-device side.
In some embodiments, the method further includes: in response to the identifier being selected, displaying the image and the annotation information.
This links the mobile terminal with the display device, so information about the target building on the mobile-terminal side can be determined quickly on the display-device side.
In some embodiments, the annotation information includes text information and voice information, and displaying the image and annotation information in response to the identifier being selected includes: displaying the image and the text information, and playing the voice information.
This likewise allows information about the target building on the mobile-terminal side to be determined quickly on the display-device side.
In some embodiments, annotating the target three-dimensional element and displaying the identifier on the model includes: using the image as the annotation information, annotating the target three-dimensional element, and displaying the resulting identifier on the three-dimensional space model.
In this way, by annotating the target three-dimensional element with the image, content from the mobile terminal is displayed on the display-device side, the two devices are linked, and information about the target building on the mobile-terminal side can be determined quickly on the display-device side.
A second aspect of the present application provides a display method applied to a mobile terminal, including: acquiring an image, wherein the image is obtained by shooting a target building; and sending the image to a display device, so that the display device compares the image with three-dimensional elements in a three-dimensional space model, confirms a target three-dimensional element from the model (the model being constructed based on multi-frame collected images of the target building), annotates the target three-dimensional element, and displays the resulting identifier on the model.
In this way, by comparing the image with the three-dimensional elements of the model, determining the element corresponding to the image, and annotating that element, content from the mobile terminal can be displayed on the display-device side, the mobile terminal and the display device are linked, and information about the target building on the mobile-terminal side can be determined quickly on the display-device side.
A third aspect of the present application provides an electronic device, which includes a processor, and a memory and a display screen coupled to the processor, wherein the processor is configured to execute program instructions stored in the memory to implement the display method in the first aspect or implement the display method in the second aspect.
A fourth aspect of the present application provides a display system, which includes a mobile terminal and a display device that are communicatively connected; the display device is configured to implement the display method in the first aspect, and the mobile terminal is configured to implement the display method in the second aspect.
A fifth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the display method of the first aspect described above, or implement the display method of the second aspect described above.
A sixth aspect of the present application provides a display apparatus, including: a receiving module configured to receive an image sent by a mobile terminal, wherein the image is obtained by shooting a target building; a processing module configured to compare the image with three-dimensional elements in a three-dimensional space model and confirm a target three-dimensional element from the model, the model being constructed based on multi-frame collected images of the target building; and an annotation module configured to annotate the target three-dimensional element and display the resulting identifier on the model.
With this scheme, by comparing the image with the three-dimensional elements of the model, determining the element corresponding to the image, and annotating that element, content from the mobile terminal can be displayed on the display-device side, the mobile terminal and the display device are linked, and information about the target building on the mobile-terminal side can be determined quickly on the display-device side.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic flow chart diagram illustrating a first embodiment of a display method provided in the present application;
FIG. 2 is a schematic diagram of an application scenario of a display method provided in the present application;
FIG. 3 is a schematic diagram of another application scenario of the display method provided in the present application;
FIG. 4 is a schematic flow chart diagram illustrating a second embodiment of a display method provided in the present application;
FIG. 5 is a schematic diagram of another application scenario of the display method provided in the present application;
FIG. 6 is a schematic flow chart diagram illustrating a display method according to a third embodiment of the present application;
FIG. 7 is a schematic flow chart diagram illustrating a fourth embodiment of a display method provided by the present application;
FIG. 8 is a schematic flow chart diagram illustrating a fifth embodiment of a display method provided in the present application;
FIG. 9 is a schematic diagram of another application scenario of the display method provided in the present application;
FIG. 10 is a schematic diagram of another application scenario of the display method provided in the present application;
FIG. 11 is a schematic structural diagram of an embodiment of a display device provided in the present application;
FIG. 12 is a schematic structural diagram of an embodiment of an electronic device provided in the present application;
FIG. 13 is a schematic diagram of an embodiment of a computer-readable storage medium provided herein;
fig. 14 is a schematic structural diagram of an embodiment of a display system provided in the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Referring to fig. 1, fig. 1 is a schematic flow chart of a display method according to a first embodiment of the present application. The method is applied to a display device and comprises the following steps:
step 11: and receiving an image sent by the mobile terminal, wherein the image is obtained by shooting the target building.
In some disclosed embodiments, the image received from the mobile terminal may be captured with an image acquisition component on the mobile terminal.
The image may be captured in response to a shooting instruction; for example, the instruction may be triggered manually. In other disclosed embodiments, no manual trigger is needed and frames are captured automatically: once the image acquisition assembly is opened, it receives the optical signal and produces images from it. In this case N frames may be obtained automatically, where N is a positive integer; quality detection is performed on each frame, and the frame with the best quality is taken as the image.
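The automatic best-frame selection described above can be sketched in a few lines. This is a minimal pure-Python illustration; the variance-of-Laplacian sharpness measure and all function names are our own assumptions, not specified by the patent:

```python
def laplacian_variance(img):
    """Sharpness score: variance of a 4-neighbour Laplacian response
    over a 2-D list of grayscale values (higher means sharper)."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            responses.append(img[y - 1][x] + img[y + 1][x]
                             + img[y][x - 1] + img[y][x + 1]
                             - 4 * img[y][x])
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def best_frame(frames):
    """Index of the sharpest of the N automatically captured frames."""
    return max(range(len(frames)), key=lambda i: laplacian_variance(frames[i]))
```

A production system would more likely use an optimized routine (e.g. a Laplacian filter from an image library), but the selection logic is the same: score every frame, keep the best.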
For example, if a fire hydrant exists in the target building, an image of the fire hydrant is captured with the mobile terminal and sent to the display device; likewise, if a picture exists in the target building, an image containing the picture is captured with the mobile terminal and sent to the display device.
Step 12: comparing the image with three-dimensional elements in the three-dimensional space model, and confirming target three-dimensional elements from the three-dimensional space model; the three-dimensional space model is constructed based on multi-frame collected images of the target building.
The description is made with reference to fig. 2 and 3:
as shown in fig. 2, videos of the building shown in fig. 2 are captured using an image capturing device, and then a three-dimensional spatial model of the building is obtained based on the videos, as shown in fig. 3.
The three-dimensional space model may be obtained by having an image acquisition device collect a large number of images of the environment (for example, an environment image set covering different times, different angles, and different positions) and then recovering, based on SfM (Structure from Motion), the sparse feature point cloud corresponding to the environment as well as a dense model, that is, a real-scene digital model.
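The geometric core that SfM repeats over many matched features, recovering a 3-D point from its projections in two views, can be illustrated with a simplified rectified-stereo sketch (the camera model, focal length, and function names below are illustrative assumptions, not the patent's pipeline):

```python
def project(point, cam_x, f=1.0):
    """Pinhole projection of a 3-D point as seen by a camera at x = cam_x
    looking down +z (simplified, rectified-stereo geometry)."""
    X, Y, Z = point
    return (f * (X - cam_x) / Z, f * Y / Z)

def triangulate(uv_left, uv_right, baseline, f=1.0):
    """Recover a 3-D point from its two projections via the disparity,
    the geometric step SfM applies across many matched features."""
    disparity = uv_left[0] - uv_right[0]
    Z = f * baseline / disparity
    return (uv_left[0] * Z / f, uv_left[1] * Z / f, Z)  # left camera at x = 0
```

Real SfM additionally estimates the unknown camera poses and refines everything with bundle adjustment; this sketch only shows why two views of the same feature pin down its position in space.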
In other embodiments, the three-dimensional space model may also be constructed using SLAM (Simultaneous Localization and Mapping) technology.
In some disclosed embodiments, the display device may have a large screen displaying the three-dimensional model.
In some disclosed embodiments, the display device may perform feature extraction on the image to obtain feature information, and then compare the feature information with a three-dimensional element in the three-dimensional space model to determine a target three-dimensional element from the three-dimensional space model.
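A minimal sketch of such a feature comparison, assuming each three-dimensional element stores a reference descriptor and using cosine similarity with an illustrative threshold (the descriptors, names, and threshold value are assumptions, not from the patent):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def match_element(query_desc, element_descs, threshold=0.8):
    """Return the id of the 3-D element whose stored descriptor is most
    similar to the query image's descriptor, or None if nothing clears
    the threshold."""
    best_id, best_score = None, threshold
    for elem_id, desc in element_descs.items():
        score = cosine(query_desc, desc)
        if score > best_score:
            best_id, best_score = elem_id, score
    return best_id
```

Returning None when no element clears the threshold gives the display device a clean failure signal rather than a spurious match.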
Step 13: and marking the target three-dimensional element, and displaying the marked mark on the three-dimensional space model.
In some disclosed embodiments, annotation information associated with the image is received synchronously with the image, and the target three-dimensional element is annotated based on that information.
For example, when the mobile terminal captures an image, the image is annotated on the mobile terminal at the same time, so the display device receives the annotation information together with the image. The annotation information may take the form of text and/or voice; the display device then annotates the target three-dimensional element with it.
The identifier corresponding to the annotation is displayed on the three-dimensional space model. For example, a corresponding identifier is created using augmented reality technology and associated with the target three-dimensional element, so that the corresponding element on the model carries an identifier. The user can view the associated annotation information by selecting the identifier.
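One hypothetical way to associate identifiers with annotated elements so that selecting an identifier retrieves its annotation; the class and field names are our assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    element_id: str      # id of the target 3-D element in the space model
    image: bytes         # frame captured by the mobile terminal
    text: str = ""       # optional text remark
    voice: bytes = b""   # optional voice remark

class ModelDisplay:
    """Minimal registry mapping on-screen identifiers to annotations."""

    def __init__(self):
        self._marks = {}

    def add_mark(self, mark_id, annotation):
        # Associate an identifier shown on the model with its annotation.
        self._marks[mark_id] = annotation

    def select(self, mark_id):
        # Called when the user selects an identifier on the large screen.
        return self._marks[mark_id]
```

The registry decouples what is rendered (the identifier) from what is stored (image, text, voice), matching the "select to view" interaction described above.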
In this way, by comparing the image with the three-dimensional elements of the model, determining the element corresponding to the image, and annotating that element, content from the mobile terminal can be displayed on the display-device side, the mobile terminal and the display device are linked, and information about the target building on the mobile-terminal side can be determined quickly on the display-device side.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a display method according to a second embodiment of the present application. The method is applied to a display device and comprises the following steps:
step 41: and receiving an image sent by the mobile terminal, wherein the image is obtained by shooting the target building.
Step 41 has the same or similar technical solution as the above embodiments and is not repeated here.
Step 42: acquiring a target area in an image; the target area is obtained by labeling the corresponding area in the image.
In some disclosed embodiments, because of the field of view of the mobile terminal's image acquisition assembly, a captured image may cover a wide range and include many different real objects. To let the display device compare quickly, the target real object can be annotated when the mobile terminal captures the image, thereby determining the target region in the image.
The description is made with reference to fig. 5:
As shown in fig. 5, the image includes real object A, real object B, real object C, and real object D. Real object A is annotated at the mobile terminal, and the target region of real object A in the image is determined.
Step 43: and comparing the target area with the three-dimensional elements in the three-dimensional space model, and confirming the target three-dimensional elements from the three-dimensional space model.
In some disclosed embodiments, if the target region matches the three-dimensional elements in the three-dimensional space model poorly, other regions of the image may be used for auxiliary comparison to obtain the target three-dimensional element.
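This fallback strategy might be sketched as follows, with `compare` standing in for whatever region-to-element comparison the display device uses (all names and the score threshold are illustrative assumptions):

```python
def match_with_fallback(regions, compare, min_score=0.6):
    """Try the annotated target region first; if it matches the model
    poorly, fall back to the remaining regions of the image."""
    for region in regions:          # regions[0] is the target region
        elem_id, score = compare(region)
        if score >= min_score:
            return elem_id
    return None
```

Because the target region is tried first, the common case still avoids whole-image comparison; the extra regions are only touched when the primary match is weak.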
Step 44: and marking the target three-dimensional element, and displaying the marked mark on the three-dimensional space model.
In this way, by comparing only the target region of the image with the three-dimensional elements and annotating the matched element, whole-image comparison is reduced, the load on the display device is lowered, and comparison efficiency is improved.
Referring to fig. 6, fig. 6 is a schematic flow chart of a display method according to a third embodiment of the present application. The method is applied to a display device and comprises the following steps:
step 61: and receiving an image sent by the mobile terminal, wherein the image is obtained by shooting the target building.
Step 61 has the same or similar technical solution as any of the above embodiments and is not repeated here.
Step 62: and performing feature extraction on the image to obtain feature information.
In some disclosed embodiments, the display device may have a feature extraction module, which performs feature extraction on the image by using a feature extraction model to obtain feature information.
In some disclosed embodiments, the display device is communicatively coupled to a server, and the display device may send the image to the server, so that the server performs feature extraction on the image to obtain feature information.
Step 63: Comparing the feature information with the three-dimensional elements in the three-dimensional space model, and confirming the target three-dimensional element from the model.
Step 64: Marking the target three-dimensional element, and displaying the resulting identifier on the three-dimensional space model.
In this way, by comparing the feature information with the three-dimensional elements, determining the element corresponding to the image, and annotating it, whole-image comparison is reduced, the load on the display device is lowered, and comparison efficiency is improved.
In some disclosed embodiments, after the target three-dimensional element is successfully annotated, a message indicating success can be sent to the mobile terminal, and the user of the mobile terminal can follow up based on this message.
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a display method according to a fourth embodiment of the present application. The method is applied to a display device and comprises the following steps:
step 71: and receiving an image and annotation information sent by the mobile terminal, wherein the image is obtained by shooting a target building, and the annotation information is associated with the image.
In some disclosed embodiments, when the mobile terminal collects the image, the image may be annotated, for example by adding text information and voice information.
In some disclosed embodiments, the annotation information can be classified, and the annotation type is set accordingly when the image is annotated.
Step 72: comparing the image with three-dimensional elements in the three-dimensional space model, and confirming target three-dimensional elements from the three-dimensional space model; the three-dimensional space model is constructed based on multi-frame collected images of the target building.
Step 73: and marking the target three-dimensional element by using the image and the marking information, and displaying the marked mark on the three-dimensional space model.
In some disclosed embodiments, the display device displays the image and the annotation information in response to an identifier being selected. In other embodiments, the display device also displays the location of the target three-dimensional element within the real building.
In this way, by annotating the target three-dimensional element with the image and the annotation information, content from the mobile terminal can be displayed on the display-device side, the mobile terminal and the display device are linked, and information about the target building on the mobile-terminal side can be determined quickly on the display-device side.
In some disclosed embodiments, the annotation information comprises textual information and voice information; the display device is responsive to the identifier being selected to display the image and text information and to play the voice information.
In some disclosed embodiments, position information of the mobile terminal is also obtained when the image sent by the mobile terminal is received. The display device determines a target sub-model within the three-dimensional space model based on the position information, compares the image with the three-dimensional elements of that sub-model to obtain the target three-dimensional element, and annotates the target three-dimensional element based on the image.
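Selecting a target sub-model from the mobile terminal's reported position might look like this minimal sketch (the sub-model structure, with an id and a centre coordinate, is an assumption for illustration):

```python
def nearest_submodel(position, submodels):
    """Pick the sub-model whose centre is closest to the mobile
    terminal's reported position, so that element comparison runs
    against far fewer candidates."""
    def dist2(center):
        # squared Euclidean distance; monotone, so no sqrt needed
        return sum((p - c) ** 2 for p, c in zip(position, center))
    return min(submodels, key=lambda m: dist2(m["center"]))["id"]
```

Comparison then proceeds only against the elements of the returned sub-model instead of the whole building model.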
In this way, determining the target sub-model from the mobile terminal's position information reduces the complexity of image comparison on the display device, lowering its load and improving comparison efficiency.
Referring to fig. 8, fig. 8 is a schematic flowchart illustrating a display method according to a fifth embodiment of the present application. The method is applied to the mobile terminal and comprises the following steps:
step 81: and acquiring an image, wherein the image is obtained by shooting the target building.
Step 82: sending the image to a display device so that the display device can compare the image with the three-dimensional elements in the three-dimensional space model, and confirming the target three-dimensional elements from the three-dimensional space model; the three-dimensional space model is constructed based on multi-frame collected images of a target building; and marking the target three-dimensional element, and displaying the marked mark on the three-dimensional space model.
The following description is made with reference to fig. 9 and 10:
As shown in fig. 9, the mobile terminal captures a real object in the target building with its image acquisition assembly to obtain an image; the image can be annotated at capture time. As shown in fig. 10, an annotation area is displayed, containing a position label, a wall label, a floor label, and a remark field.
The user selects the appropriate label and drags it to the target position on the image, and any information entered in the remark field is collected.
The image, the label, and the remark information are then sent to the display device.
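The message sent to the display device could be assembled, for example, as a JSON payload like the following; the field names and encoding are illustrative assumptions, not a protocol defined by the patent:

```python
import json

def build_payload(image_b64, label_type, label_xy, remark):
    """Assemble the message the mobile terminal sends to the display
    device: the captured frame, the dragged label, and the remark."""
    return json.dumps({
        "image": image_b64,                # base64-encoded frame
        "label": {"type": label_type,      # e.g. "position" / "wall" / "floor"
                  "x": label_xy[0],        # drag position on the image
                  "y": label_xy[1]},
        "remark": remark,
    })
```

Keeping the label type an explicit field is what later lets the display side classify labels for analysis and decision-making.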
After receiving the image, the label, and the remark information, the display device annotates and displays the corresponding three-dimensional element in the three-dimensional space model based on the image, using the steps of any of the display-device embodiments above.
With this scheme, by comparing the image with the three-dimensional elements of the model, determining the element corresponding to the image, and annotating that element, content from the mobile terminal can be displayed on the display-device side, the mobile terminal and the display device are linked, and information about the target building on the mobile-terminal side can be determined quickly on the display-device side.
In one application scenario, the mobile terminal is used to photograph, locate, and annotate a target object in the scene based on high-precision map technology, and the result is then displayed dynamically in real time on the large screen of the display device, realizing linkage between the mobile terminal and the display device. In this way people, affairs, and objects can be located in real time without investing in additional positioning hardware, and their dynamic linked display is completed through the mobile terminal alone.
Specifically, a cell phone or a panoramic camera is used as the capture tool to collect image and video data of the physical space. A dense model of the physical space is built with SfM, completing 3D modeling of the whole target physical space and yielding a 1:1 three-dimensional space model, which is connected to the large-screen display of the display device.
For example, the reconstructed three-dimensional space model is deployed to a cloud server and accessed by the large screen of the display device for display. The mobile terminal can then be positioned and visually displayed on the display device.
Then, visual positioning of the mobile terminal is performed based on a VPS (Visual Positioning System); once positioning succeeds, related information can be annotated, and the large screen of the display device updates to the corresponding annotation information in real time.
Based on VPS technology, the image frame captured by the image acquisition assembly of the current mobile terminal is uploaded to the cloud server and matched against the deployed three-dimensional space model; once matching succeeds, the current position is located.
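A toy sketch of this VPS-style matching step: compare the uploaded frame's descriptor against descriptors of keyframes registered in the deployed model and return the stored pose of the closest match. The descriptors, distance threshold, and pose labels are all illustrative assumptions:

```python
def localize(query_desc, keyframes, max_dist=0.5):
    """Match the uploaded frame's descriptor against keyframes registered
    in the deployed model; return the pose of the closest keyframe, or
    None when no keyframe is close enough (matching failed)."""
    def dist(a, b):
        # Euclidean distance between descriptors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(keyframes, key=lambda kf: dist(query_desc, kf["desc"]))
    if dist(query_desc, best["desc"]) > max_dist:
        return None
    return best["pose"]
```

A real VPS would refine this retrieved pose with 2D-3D feature correspondences; the sketch shows only the retrieval-and-threshold skeleton.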
An annotation operation is performed on the mobile terminal as follows: a label type is selected, and the label is superimposed on the image in the current camera field of view. Labels can be classified according to actual business requirements, such as position labels, wall labels, and floor labels. When a label is placed, remarks (text, pictures, voice, and so on) can be attached as needed.
The tag coordinates and tag information of the mobile terminal are dynamically updated to the corresponding position of the three-dimensional space model on the display device for display.
The tag coordinates of the mobile terminal are displayed at the corresponding position of the three-dimensional space model on the display device.
The tag information of the mobile terminal allows the corresponding tag on the display device to be clicked to display the corresponding content, supporting further analysis and decision-making.
By clicking a tag on the display device, the user can view the image frame located by the mobile terminal, the tag type, and the associated uploaded information.
The display device can dynamically update the dynamic tags of the mobile terminal. The update frequency can be set according to business requirements, for example every 5 minutes, 6 minutes, 1 hour, 2 hours, and the like.
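A configurable update frequency of this kind can be sketched as a simple periodic push loop; the function and parameter names are hypothetical, and a production system would more likely use a scheduler or a push channel than a blocking loop:

```python
import time

UPDATE_INTERVAL_S = 5 * 60   # e.g. every 5 minutes; configurable per business need

def push_updates(get_annotations, send_to_display, interval=UPDATE_INTERVAL_S, cycles=None):
    """Periodically push the latest tag coordinates and tag info to the display device.
    cycles=None runs forever; a finite value is convenient for testing."""
    n = 0
    while cycles is None or n < cycles:
        send_to_display(get_annotations())   # one update of the 3D model's tags
        n += 1
        if cycles is None or n < cycles:
            time.sleep(interval)

received = []
push_updates(lambda: [{"tag": "wall", "pos": (1, 2, 0)}],
             received.append, interval=0, cycles=2)
```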
In this way, people, events, and objects can be positioned more efficiently based on the high-precision map, and the large-screen display is updated in linkage. Hardware cost is saved, the threshold for use is lower, and tags can be classified, facilitating subsequent analysis and decision-making.
Take AR inspection as an example. Inspection personnel need to patrol a site regularly; the site area is large and the inspection content is extensive. When an abnormal condition occurs, the specific position may be unclear, and communicating it takes time. With the technical solution provided by the present application, an inspector opens the mobile terminal and superimposes tags on each object, so that the relevant position information can be acquired and displayed on the large screen of the display device. Meanwhile, when an abnormal condition occurs, the inspector only needs to take a picture and superimpose a tag to record the current abnormality and the position where the event occurred. Personnel on the display device side only need to look up the corresponding tag before going to handle the issue, saving the time otherwise spent searching and communicating.
It will be understood by those skilled in the art that, in the methods of the present application, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a display device according to an embodiment of the present disclosure. The display device 110 includes a receiving module 111, a processing module 112, and an annotation module 113, wherein:
the receiving module 111 is configured to receive an image sent by the mobile terminal, where the image is obtained by shooting a target building;
the processing module 112 is configured to compare the image with a three-dimensional element in the three-dimensional space model, and determine a target three-dimensional element from the three-dimensional space model; the three-dimensional space model is constructed based on multi-frame collected images of a target building;
the labeling module 113 is configured to label the target three-dimensional element, and display a labeled identifier on the three-dimensional space model.
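The cooperation of the three modules can be sketched as a minimal receive / compare / annotate pipeline; the function names and the toy matching score are hypothetical stand-ins, not identifiers from the disclosure:

```python
def handle_image(image, model_elements, match_score, label):
    """Receive an image, compare it against the model's 3D elements, and
    annotate the best-matching one (the target three-dimensional element)."""
    target = max(model_elements, key=lambda elem: match_score(image, elem))
    label(target)   # display the annotated identifier on the 3D space model
    return target

# Toy stand-ins: elements are names; the score is a trivial substring match.
elements = ["tower_block", "annex", "lobby"]
score = lambda img, elem: len(elem) if elem in img else 0
labeled = []
target = handle_image("lobby_photo", elements, score, labeled.append)
```

In the actual system, `match_score` would be the feature-based comparison against the three-dimensional space model and `label` would push the annotated identifier to the large screen.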
According to the above solution, the image is compared with the three-dimensional elements of the three-dimensional space model to determine the element corresponding to the image, and that element is annotated, so that the content of the mobile terminal can be displayed on the display device side. This realizes linkage between the mobile terminal and the display device, and allows the information about the target building on the mobile terminal side to be quickly determined on the display device side.
In some disclosed embodiments, the processing module 112 is further configured to acquire a target region in the image, where the target region is obtained by labeling a corresponding region in the image, and to compare the target region with the three-dimensional elements in the three-dimensional space model so as to determine the target three-dimensional element from the three-dimensional space model.
In some disclosed embodiments, the processing module 112 is further configured to perform feature extraction on the image to obtain feature information, and to compare the feature information with the three-dimensional elements in the three-dimensional space model so as to determine the target three-dimensional element from the three-dimensional space model.
In some disclosed embodiments, the receiving module 111 is further configured to receive annotation information sent by the mobile terminal, where the annotation information is associated with an image; the labeling module 113 is further configured to label the target three-dimensional element by using the image and the labeling information, and display a labeled identifier on the three-dimensional space model.
In some disclosed embodiments, the display device 110 further includes a display module (not shown) for displaying the image and the annotation information in response to the identifier being selected.
In some disclosed embodiments, the display module is further configured to display the image and the text information in response to the identifier being selected, and the processing module 112 is further configured to play the voice information.
In some embodiments, the labeling module 113 is further configured to label the target three-dimensional element with the image as labeling information, and display the labeled identifier on the three-dimensional space model.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an embodiment of an electronic device provided in the present application. The electronic device 120 comprises a processor 121, and a memory 122 and a display screen 123 coupled to the processor 121, wherein the processor 121 is configured to execute program instructions stored in the memory to implement the steps of any of the above-described embodiments of the display method.
In one specific implementation scenario, the electronic device 120 may include, but is not limited to, a microcomputer or a server; in addition, the electronic device 120 may further include mobile devices such as a notebook computer, a tablet computer, a smart phone, and a wearable device, which are not limited herein.
Specifically, the processor 121 is configured to control itself and the memory 122 to implement the steps of any of the above-described display method embodiments. The processor 121 may also be referred to as a CPU (Central Processing Unit). The processor 121 may be an integrated circuit chip having signal processing capability. The processor 121 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 121 may be jointly implemented by a plurality of integrated circuit chips.
According to the above solution, the image is compared with the three-dimensional elements of the three-dimensional space model to determine the element corresponding to the image, and that element is annotated, so that the content of the mobile terminal can be displayed on the display device side. This realizes linkage between the mobile terminal and the display device, and allows the information about the target building on the mobile terminal side to be quickly determined on the display device side.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an embodiment of a computer-readable storage medium 130 according to the present application. The computer readable storage medium 130 stores program instructions 131 capable of being executed by a processor, the program instructions 131 being for implementing the steps of any of the above-described embodiments of the display method.
According to the above solution, the image is compared with the three-dimensional elements of the three-dimensional space model to determine the element corresponding to the image, and that element is annotated, so that the content of the mobile terminal can be displayed on the display device side. This realizes linkage between the mobile terminal and the display device, and allows the information about the target building on the mobile terminal side to be quickly determined on the display device side.
Referring to fig. 14, fig. 14 is a schematic structural diagram of an embodiment of a display system of the present application. The display system 140 includes a mobile terminal 141 and a display device 142 communicatively connected; the display device 142 is configured to implement the steps of the display method embodiment applied to the display device, and the mobile terminal 141 is configured to implement the steps of the display method embodiment applied to the mobile terminal.
According to the above solution, the image is compared with the three-dimensional elements of the three-dimensional space model to determine the element corresponding to the image, and that element is annotated, so that the content of the mobile terminal can be displayed on the display device side. This realizes linkage between the mobile terminal and the display device, and allows the information about the target building on the mobile terminal side to be quickly determined on the display device side.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states, and attributes of a target object by means of various vision-related algorithms operating on image information of the target object acquired in a real environment, thereby obtaining an AR effect combining the virtual and the real for a specific application. For example, the target object may be a face, limb, gesture, or action associated with a human body, or a marker or sign associated with an object, or a sand table, display area, or display item associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key-point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application may involve not only interactive scenarios related to real scenes or articles, such as navigation, explanation, reconstruction, and virtual-effect overlay display, but also special-effect processing related to people, such as interactive scenarios of makeup beautification, body beautification, special-effect display, and virtual model display.
The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; the division of modules or units is merely a logical functional division, and other divisions may be used in actual implementation, for example, units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be electrical, mechanical, or in other forms.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (12)

1. A display method is applied to a display device, and the method comprises the following steps:
receiving an image sent by a mobile terminal, wherein the image is obtained by shooting a target building;
comparing the image with three-dimensional elements in a three-dimensional space model, and confirming target three-dimensional elements from the three-dimensional space model; the three-dimensional space model is constructed based on multi-frame collected images of a target building;
and labeling the target three-dimensional element, and displaying a labeled identifier on the three-dimensional space model.
2. The method of claim 1, wherein the comparing the image with three-dimensional elements in the three-dimensional space model and confirming the target three-dimensional element from the three-dimensional space model comprises:
acquiring a target area in the image; the target area is obtained by labeling a corresponding area in the image;
and comparing the target area with the three-dimensional elements in the three-dimensional space model, and confirming the target three-dimensional elements from the three-dimensional space model.
3. The method of claim 1, wherein the comparing the image with three-dimensional elements in the three-dimensional space model and confirming the target three-dimensional element from the three-dimensional space model comprises:
extracting the features of the image to obtain feature information;
and comparing the characteristic information with the three-dimensional elements in the three-dimensional space model, and confirming the target three-dimensional elements from the three-dimensional space model.
4. The method of claim 1, further comprising:
receiving annotation information sent by a mobile terminal, wherein the annotation information is associated with the image;
the marking the target three-dimensional element and displaying the marked identifier on the three-dimensional space model comprises the following steps:
and labeling the target three-dimensional element by using the image and the labeling information, and displaying a labeled identifier on the three-dimensional space model.
5. The method of claim 4, further comprising:
and responding to the selected identifier, displaying the image and the annotation information.
6. The method of claim 5, wherein the annotation information comprises textual information and voice information;
the displaying the image and the annotation information in response to the identifier being selected includes:
and responding to the selected identifier, displaying the image and the text information, and playing the voice information.
7. The method of claim 1, wherein labeling the target three-dimensional element and displaying the labeled identifier on the three-dimensional space model comprises:
and taking the image as labeling information, labeling the target three-dimensional element, and displaying a labeled identifier on the three-dimensional space model.
8. A display method is applied to a mobile terminal, and the method comprises the following steps:
acquiring an image, wherein the image is obtained by shooting a target building;
sending the image to a display device so that the display device can compare the image with three-dimensional elements in a three-dimensional space model, and confirming target three-dimensional elements from the three-dimensional space model; the three-dimensional space model is constructed based on multi-frame collected images of a target building; and marking the target three-dimensional element, and displaying the marked mark on the three-dimensional space model.
9. An electronic device comprising a processor and a memory and a display screen coupled to the processor, the processor being configured to execute program instructions stored in the memory to implement the display method of any of claims 1-8.
10. A display system, characterized in that the display system comprises a mobile terminal and a display device which are connected in communication; the display device is used for realizing the display method according to any one of claims 1 to 7, and the mobile terminal is used for realizing the display method according to claim 8.
11. A computer-readable storage medium having stored thereon program instructions, which when executed by a processor implement the display method of any one of claims 1 to 7.
12. A display device, characterized in that the display device comprises:
the receiving module is used for receiving an image sent by the mobile terminal, wherein the image is obtained by shooting a target building;
the processing module is used for comparing the image with three-dimensional elements in a three-dimensional space model and confirming target three-dimensional elements from the three-dimensional space model; the three-dimensional space model is constructed based on multi-frame collected images of a target building;
and the marking module is used for marking the target three-dimensional element and displaying the marked mark on the three-dimensional space model.
CN202111658477.5A 2021-12-30 2021-12-30 Display method, display device, display system, electronic device, and storage medium Pending CN114299269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111658477.5A CN114299269A (en) 2021-12-30 2021-12-30 Display method, display device, display system, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111658477.5A CN114299269A (en) 2021-12-30 2021-12-30 Display method, display device, display system, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN114299269A true CN114299269A (en) 2022-04-08

Family

ID=80974379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111658477.5A Pending CN114299269A (en) 2021-12-30 2021-12-30 Display method, display device, display system, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN114299269A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115761152A (en) * 2023-01-06 2023-03-07 深圳星坊科技有限公司 Image processing and three-dimensional reconstruction method and device under common light source and computer equipment


Similar Documents

Publication Publication Date Title
CN109858371B (en) Face recognition method and device
KR20190032084A (en) Apparatus and method for providing mixed reality content
CN111046725B (en) Spatial positioning method based on face recognition and point cloud fusion of surveillance video
CN108200334B (en) Image shooting method and device, storage medium and electronic equipment
CN108958469B (en) Method for adding hyperlinks in virtual world based on augmented reality
CN111144356B (en) Teacher sight following method and device for remote teaching
CN107194968B (en) Image identification tracking method and device, intelligent terminal and readable storage medium
KR20160057867A (en) Display apparatus and image processing method thereby
CN111340848A (en) Object tracking method, system, device and medium for target area
CN112085534B (en) Attention analysis method, system and storage medium
CN107704851B (en) Character identification method, public media display device, server and system
CN112163503A (en) Method, system, storage medium and equipment for generating insensitive track of personnel in case handling area
CN111611871B (en) Image recognition method, apparatus, computer device, and computer-readable storage medium
CN111339943A (en) Object management method, system, platform, equipment and medium
CN114299269A (en) Display method, display device, display system, electronic device, and storage medium
CN111291638A (en) Object comparison method, system, equipment and medium
JP2015228564A (en) Monitoring camera system
WO2022205329A1 (en) Object detection method, object detection apparatus, and object detection system
CN116760937B (en) Video stitching method, device, equipment and storage medium based on multiple machine positions
CN112288876A (en) Long-distance AR identification server and system
CN110414322B (en) Method, device, equipment and storage medium for extracting picture
CN113569594A (en) Method and device for labeling key points of human face
CN110472551A (en) A kind of across mirror method for tracing, electronic equipment and storage medium improving accuracy
CN111177449B (en) Multi-dimensional information integration method based on picture and related equipment
CN116363725A (en) Portrait tracking method and system for display device, display device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination