CN115345927A - Exhibit guide method and related device, mobile terminal and storage medium


Info

Publication number
CN115345927A
CN115345927A
Authority
CN
China
Prior art keywords
exhibit
target
mobile terminal
picture
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210989319.6A
Other languages
Chinese (zh)
Inventor
马骞女
揭志伟
孙红亮
王子彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202210989319.6A
Publication of CN115345927A

Classifications

    • G06T 7/70 - Image data processing; image analysis; determining position or orientation of objects or cameras
    • G06F 3/011 - Input arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06V 10/75 - Image or video recognition using pattern recognition or machine learning; organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; use of context analysis; selection of dictionaries
    • G06V 10/761 - Image or video recognition using pattern recognition or machine learning; proximity, similarity or dissimilarity measures
    • G06T 2207/10004 - Indexing scheme for image analysis or image enhancement; image acquisition modality: still image; photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an exhibit guide method and a related device, a mobile terminal and a storage medium. The exhibit guide method includes: in response to recognizing a target exhibit in the shot picture of a mobile terminal, detecting the shot picture to obtain the image position, in the shot picture, of a key part of the target exhibit, where the target exhibit is an exhibit provided with explanation data in advance, and the explanation data includes explanation information for each key part of the target exhibit; displaying an AR indicator at the image position; and, in response to the user triggering the AR indicator, outputting the explanation information of the key part. With this scheme, the navigation experience can be improved.

Description

Exhibit guide method and related device, mobile terminal and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular to an exhibit guide method and a related device, a mobile terminal, and a storage medium.
Background
At present, exhibition guidance usually relies on manual explanation, which is costly. With the development of electronic information technology, voice explanation triggered through code scanning, handheld audio guides and the like has become increasingly popular.
However, whether codes are scanned or an audio guide is used, there is little interaction during the guidance process, and the interactive experience is poor. How to enhance the navigation experience is therefore an urgent problem to be solved.
Disclosure of Invention
The application provides an exhibit guide method, a related device, a mobile terminal and a storage medium.
A first aspect of the present application provides an exhibit guide method, including: in response to recognizing a target exhibit in the shot picture of a mobile terminal, detecting the shot picture to obtain the image position, in the shot picture, of a key part of the target exhibit, where the target exhibit is an exhibit provided with explanation data in advance, and the explanation data includes explanation information for each key part of the target exhibit; displaying an AR indicator at the image position; and, in response to the user triggering the AR indicator, outputting the explanation information of the key part.
Therefore, in response to recognizing the target exhibit in the shot picture of the mobile terminal, the shot picture is detected to obtain the image position, in the shot picture, of a key part of the target exhibit, where the target exhibit is an exhibit provided with explanation data in advance and the explanation data includes explanation information for each key part; on this basis, an AR indicator is displayed at the image position, and in response to the user triggering the AR indicator, the explanation information of the key part is output. In this way, exhibit guidance is realized in the interactive form of triggered explanation, which improves explanation efficiency and thus the navigation experience.
Before detecting the shot picture in response to recognizing the target exhibit in the shot picture of the mobile terminal, the method further includes: matching the shot picture with each exhibit model to obtain a matching result, where the matching result includes the matching degree between the current exhibit and each exhibit model; and analyzing the matching result to obtain a recognition result of the shot picture, where the recognition result includes whether the current exhibit is a target exhibit.
Therefore, the shot picture is matched with each exhibit model to obtain a matching result that includes the matching degree between the current exhibit and each exhibit model, and the matching result is analyzed to obtain the recognition result of the shot picture, which includes whether the current exhibit is a target exhibit. Whether a target exhibit is present in the shot picture can thus be recognized by model matching, which helps improve the accuracy of exhibit recognition.
The analyzing based on the matching result to obtain the recognition result of the shot picture includes: in response to the maximum matching degree being higher than a preset threshold, taking the exhibit to which the exhibit model corresponding to the maximum matching degree belongs as the target exhibit, and determining that the recognition result includes that the current exhibit is the target exhibit.
Therefore, when the maximum matching degree is greater than the preset threshold, the exhibit to which the corresponding exhibit model belongs is taken as the target exhibit, and the recognition result is determined to include that the current exhibit is the target exhibit. The exhibit model whose matching degree with the current exhibit is both maximal and greater than the preset threshold can thus be screened out during exhibit recognition, and the target exhibit and the recognition result are determined from it, further improving the accuracy of recognizing the target exhibit.
The exhibit to which an exhibit model belongs includes a plurality of key parts, and the exhibit model is marked with the explanation information of each key part at the model position corresponding to that key part.
Therefore, since the exhibit to which the exhibit model belongs contains a plurality of key parts and the exhibit model is marked with the explanation information of each key part at the corresponding model position, the key parts in the shot picture can subsequently be detected directly from the model positions marked with explanation information, and the explanation information of a key part can be obtained directly from the exhibit model. Both detecting the key parts and obtaining the explanation information can thus be realized based on the exhibit model, which helps improve the efficiency of exhibit guidance.
The target exhibit has a plurality of key parts, and the method further includes: in response to the presence of an undetected key part, outputting a first prompt, where the first prompt prompts adjusting the shooting pose of the mobile terminal so as to shoot the undetected key part.
Therefore, when the target exhibit has a plurality of key parts and an undetected key part exists, the user is prompted by outputting the first prompt to adjust the shooting pose of the mobile terminal so as to shoot the undetected key part. The user can thus get to know all key parts of the target exhibit as fully as possible while visiting it, which helps improve the interactive experience of exhibit guidance.
Prior to outputting the first prompt, the method further includes: acquiring a first position of the undetected key part on the target exhibit and the current pose of the mobile terminal; and analyzing based on the first position and the current pose to obtain the shooting pose to which the mobile terminal needs to be adjusted.
Therefore, before the first prompt is output, the first position of the undetected key part on the target exhibit and the current pose of the mobile terminal are acquired, and the shooting pose to which the mobile terminal needs to be adjusted is obtained by analyzing the first position and the current pose, which helps improve the accuracy of the prompted shooting pose.
The method further includes: in response to the user adjusting the mobile terminal to a new shooting pose and the target exhibit being recognized in a new shot picture of the mobile terminal, displaying the AR indicator in the new shooting pose.
Therefore, in response to the user adjusting the mobile terminal to a new shooting pose and the target exhibit being recognized in the new shot picture, the AR indicator in the new shooting pose is displayed, so that the AR indicator follows the display after the user adjusts the shooting pose, which helps improve the interactive experience of exhibit guidance.
The method further includes: in response to the presence in the exhibition hall of an associated exhibit related to the target exhibit, outputting a second prompt, where the second prompt prompts the user to visit the associated exhibit.
Therefore, when an associated exhibit related to the target exhibit the user is currently visiting exists in the exhibition hall, the user is prompted to visit it by outputting the second prompt, so that the user's guidance needs for exhibits of interest can be met as far as possible, which helps improve the interactive experience of exhibit guidance.
After outputting the second prompt, the method further includes: in response to receiving the user's confirmation instruction about visiting the associated exhibit, acquiring a second position of the associated exhibit in the exhibition hall and the current pose of the mobile terminal; and displaying an AR navigation mark on the current picture of the mobile terminal based on the second position and the current pose.
Therefore, after the second prompt is output and the user's confirmation instruction about visiting the associated exhibit is received, the second position of the associated exhibit in the exhibition hall and the current pose of the mobile terminal are acquired, and an AR navigation mark is displayed on the current picture of the mobile terminal based on them, so that the user can be navigated to the associated exhibit, which helps improve the interactive experience of exhibit guidance.
A second aspect of the present application provides an exhibit guide device, including a detection module, an identification module and an interaction module. The detection module is configured to, in response to recognizing a target exhibit in the shot picture of a mobile terminal, detect the shot picture to obtain the image position, in the shot picture, of a key part of the target exhibit, where the target exhibit is an exhibit provided with explanation data in advance and the explanation data includes explanation information for each key part of the target exhibit. The identification module is configured to display an AR indicator at the image position. The interaction module is configured to, in response to the user triggering the AR indicator, output the explanation information of the key part.
A third aspect of the present application provides a mobile terminal, including a camera, a display screen, a memory and a processor, where the camera, the display screen and the memory are each coupled to the processor, and the processor is configured to execute program instructions stored in the memory to implement the exhibit guide method of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having program instructions stored thereon that, when executed by a processor, implement the exhibit guide method of the first aspect.
According to the above scheme, in response to recognizing the target exhibit in the shot picture of the mobile terminal, the shot picture is detected to obtain the image position, in the shot picture, of a key part of the target exhibit, where the target exhibit is an exhibit provided with explanation data in advance and the explanation data includes explanation information for each key part; an AR indicator is displayed at the image position, and in response to the user triggering the AR indicator, the explanation information of the key part is output. During exhibit guidance the user can thus shoot an exhibit with the mobile terminal to recognize it and, when a target exhibit is recognized, interact through the AR indicators displayed at the image positions of its key parts, so that exhibit guidance is realized through interactively triggered explanation, explanation efficiency is improved, and the navigation experience is enhanced.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the exhibit guide method of the present application;
FIG. 2a is a schematic effect diagram of an embodiment of the exhibit guide method of the present application;
FIG. 2b is a schematic effect diagram of another embodiment of the exhibit guide method of the present application;
FIG. 2c is a schematic effect diagram of another embodiment of the exhibit guide method of the present application;
FIG. 2d is a schematic effect diagram of another embodiment of the exhibit guide method of the present application;
FIG. 3a is a schematic effect diagram of another embodiment of the exhibit guide method of the present application;
FIG. 3b is a schematic effect diagram of another embodiment of the exhibit guide method of the present application;
FIG. 4 is a schematic frame diagram of an embodiment of the exhibit guide device of the present application;
FIG. 5 is a schematic frame diagram of an embodiment of the mobile terminal of the present application;
FIG. 6 is a schematic frame diagram of an embodiment of the computer-readable storage medium of the present application.
Detailed Description
The embodiments of the present application will be described in detail below with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely an association relationship describing an associated object, and means that there may be three relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of an exhibit guiding method according to the present application.
Specifically, the method may include the steps of:
step S11: and responding to the recognition of the target exhibit in the shooting picture of the mobile terminal, and detecting the shooting picture to obtain the image position of the key part of the target exhibit in the shooting picture.
In the disclosed embodiments, the target exhibit is an exhibit provided with explanation data in advance, and the explanation data includes explanation information for each key part of the target exhibit.
In one implementation scenario, every exhibit in the exhibition hall may be provided with explanation data in advance, i.e., every exhibit in the hall may be a target exhibit; alternatively, only some exhibits may be provided with explanation data in advance, i.e., only some exhibits are target exhibits. For example, rare or unique exhibits in the exhibition hall may be selected as target exhibits and provided with explanation data in advance.
In an implementation scenario, the target exhibit has at least one key part: it may have one, two, three or more key parts, which is not limited herein. For example, taking a porcelain vase as the target exhibit, its key parts may include, but are not limited to: the bottle mouth, the bottle neck, the ring foot, the chi ears, etc., which is not limited herein. Other cases can be deduced by analogy and are not enumerated here.
In one implementation scenario, the explanation information of a key part may include, but is not limited to: text information, image information, audio information, video information, etc., or a combination of several of these. For example, experts, scholars or researchers in the field to which the target exhibit belongs may record explanation videos of its key parts in advance as the explanation information, or those videos may be condensed into text information, or only the audio track may be extracted as the explanation information, which is not limited herein.
In one implementation scenario, in order to recognize whether a target exhibit is present in the shot picture of the mobile terminal, the target exhibit may be photographed from various angles in advance to obtain a plurality of images, which are combined into an image set of the target exhibit. For example, the target exhibit may be photographed from a front view, side views (e.g., left and right views), a top view, and so on, which is not limited herein. In addition, to reduce subsequent recognition interference as much as possible, other objects can be kept out of the shot during capture, or objects other than the target exhibit can be removed from the images afterwards. On this basis, recognizing the shot picture only requires comparing it with the image set of each target exhibit to obtain a comparison score for each target exhibit; if the maximum comparison score is greater than a preset threshold, it can be determined that a target exhibit is present in the shot picture, namely the target exhibit corresponding to the maximum comparison score. Specifically, the image region of the current exhibit in the shot picture may be detected first; then, for the image set of each target exhibit, the similarity between each image in the set and the image region is obtained, and the per-image similarities are aggregated (e.g., summed, averaged or weighted) into the comparison score of that target exhibit.
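As an illustration only (not part of the patent's disclosure), a minimal Python sketch of this scoring logic follows, assuming image features have already been extracted; the function names, the cosine measure and the threshold value are assumptions for illustration:
```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def comparison_scores(region_feature, image_sets):
    """image_sets: {exhibit_id: [feature vectors of pre-captured images]}.
    Per-image similarities are aggregated (here: averaged) into one
    comparison score per target exhibit."""
    return {eid: float(np.mean([cosine_sim(region_feature, f) for f in feats]))
            for eid, feats in image_sets.items()}

def recognize(scores, threshold=0.8):
    """The current exhibit is the target exhibit with the maximum
    comparison score, provided that score exceeds the preset threshold."""
    eid, best = max(scores.items(), key=lambda kv: kv[1])
    return eid if best > threshold else None
```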
In a specific implementation scenario, in order to improve the accuracy and efficiency of detecting the image region of the current exhibit in the shot picture, a target detection network may be trained in advance for this purpose, and the shot picture then detected with it to obtain the image region of the exhibit. The target detection network may include, but is not limited to, a convolutional neural network, etc. During training, sample images of a plurality of exhibits can be collected in advance and the sample regions of the exhibits in them labeled. On this basis, the sample image is detected with the target detection network to obtain the predicted region of the exhibit, and the network parameters of the target detection network are adjusted based on the difference between the labeled sample region and the predicted region. The difference may be measured with a loss function such as cross entropy, and the parameters adjusted with an optimization method such as gradient descent, which are not described again here.
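A hedged sketch of one such training step, given a model and optimizer, using PyTorch for concreteness (the patent does not prescribe a framework, and real detectors involve box regression and matching that are omitted here):
```python
import torch
import torch.nn.functional as F

def detector_train_step(model, optimizer, sample_image, region_labels):
    """One gradient-descent step: predict the exhibit region, measure the
    difference against the labeled sample region with cross entropy, and
    adjust the network parameters."""
    optimizer.zero_grad()
    region_logits = model(sample_image)      # per-pixel region logits
    loss = F.cross_entropy(region_logits, region_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```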
In a specific implementation scenario, in order to improve the accuracy and efficiency of measuring similarity, a feature extraction network may be trained in advance for extracting image features. A first feature of the image region and a second feature of each image in the image set are extracted with this network, and the similarity between each image in the set and the image region is obtained from the similarity (e.g., cosine similarity) between the first and second features. The feature extraction network may include, but is not limited to, convolutional layers, etc.; its network structure is not limited herein. During training, sample images of a plurality of exhibits can be collected in advance and their sample features extracted with the network. For each sample image, a sample image belonging to the same exhibit serves as a positive example and a sample image belonging to a different exhibit as a negative example, so that the loss value of the feature extraction network is obtained from the difference between the sample features of the sample image and those of its positive example, and the difference between the sample features of the sample image and those of its negative example. On this basis, the network parameters can be adjusted based on the loss value. The difference may be measured with a loss function such as the triplet loss, and the parameters adjusted with an optimization method such as gradient descent, which are not described again here.
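For illustration, a minimal sketch of this metric-learning step with the triplet loss, again assuming PyTorch (an assumption made here, not the patent's requirement):
```python
import torch
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=0.2)

def feature_train_step(net, optimizer, anchor, positive, negative):
    """anchor and positive are sample images of the same exhibit;
    negative is a sample image of a different exhibit. The loss pulls
    same-exhibit features together and pushes different ones apart."""
    optimizer.zero_grad()
    loss = triplet_loss(net(anchor), net(positive), net(negative))
    loss.backward()
    optimizer.step()
    return loss.item()
```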
In a specific implementation scenario, the image set of a target exhibit may contain not only images captured of the target exhibit from different angles but also images of its key parts. When the target exhibit is recognized in the shot picture, feature-point matching can be performed between the image region of the current exhibit in the shot picture and the image of each key part of the target exhibit, yielding the matching degree of each key part and its image position in the shot picture. On this basis, key parts whose matching degree is greater than a preset threshold can be screened out and determined to be the key parts present in the shot picture. The feature-point matching may include, but is not limited to: ORB (Oriented FAST and Rotated BRIEF), SIFT (Scale-Invariant Feature Transform), etc.
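As a sketch only, assuming OpenCV is available (the Lowe ratio test and the normalisation of the matching degree are choices made here, not specified by the patent):
```python
import cv2
import numpy as np

def match_key_part(region_img, part_img, ratio=0.75):
    """ORB feature-point matching between the exhibit's image region in
    the shot picture and a reference image of one key part. Returns a
    matching degree in [0, 1] and a rough image position of the part."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(region_img, None)
    kp2, des2 = orb.detectAndCompute(part_img, None)
    if des1 is None or des2 is None:
        return 0.0, None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    degree = len(good) / max(len(kp2), 1)
    if not good:
        return degree, None
    pts = np.float32([kp1[m.queryIdx].pt for m in good])
    return degree, pts.mean(axis=0)   # mean matched location in the frame
```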
In one implementation scenario, in order to improve the accuracy of recognizing target exhibits, an exhibit model of each target exhibit may be constructed in advance, unlike the aforementioned pre-collected image sets. On this basis, the shot picture can be matched with the exhibit model of each target exhibit to obtain a matching result that includes the matching degree between the current exhibit in the shot picture and each exhibit model, and the matching result is analyzed to obtain the recognition result of the shot picture, which includes whether the current exhibit is a target exhibit. In this way, whether a target exhibit is present in the shot picture can be recognized by model matching, which helps improve the accuracy of exhibit recognition.
In a specific implementation scenario, the target exhibit may be photographed from different angles to obtain a series of two-dimensional images containing visual motion information, and these images may be three-dimensionally reconstructed, e.g. by SfM (Structure from Motion), to obtain the exhibit model of the target exhibit. The specific reconstruction process follows the technical details of methods such as SfM and is not repeated here. In addition, the exhibit model in the disclosed embodiments can undergo operations such as model rendering, so that it carries the texture, color and other characteristics of the target exhibit. Still taking the porcelain vase as an example, the generated exhibit model has not only the shape of the vase but also its texture features, such as lines of varying thickness on its surface, and its color features, such as the various glazes on its surface, which is not limited herein.
In a specific implementation scenario, as described above, the image region of the current exhibit in the shot picture may be detected first and then matched against the exhibit model of each target exhibit; the detection of the image region is as described earlier and not repeated here. On this basis, for the exhibit model of each target exhibit, matching with the image region is performed to obtain the matching degree between that exhibit model and the current exhibit. Specifically, the pixels of the image region can be projected into three-dimensional space based on their two-dimensional coordinates, their depth values, and the camera pose and camera parameters of the mobile terminal to obtain projection points, forming point cloud data, so that the matching degree between the point cloud data and the exhibit model of each target exhibit can be compared. The camera pose may be determined by visual localization methods such as SLAM (Simultaneous Localization and Mapping), whose technical details are not repeated here.
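A minimal back-projection sketch under the usual pinhole-camera assumptions (the matrix names are illustrative, not from the patent):
```python
import numpy as np

def backproject(pixels, depths, K, T_cam_to_world):
    """Lift 2D pixels (N, 2) with depth values (N,) into world space.
    K is the 3x3 camera intrinsic matrix; T_cam_to_world is the 4x4
    camera pose, e.g. from SLAM-style visual localization."""
    n = pixels.shape[0]
    ones = np.ones((n, 1))
    homog = np.hstack([pixels.astype(float), ones])   # (N, 3) homogeneous
    rays = (np.linalg.inv(K) @ homog.T).T             # normalized rays
    pts_cam = rays * depths[:, None]                  # scale by depth
    pts_cam_h = np.hstack([pts_cam, ones])            # (N, 4)
    return (T_cam_to_world @ pts_cam_h.T).T[:, :3]    # point cloud (N, 3)
```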
In a specific implementation scenario, different from detecting the image region of the current exhibit and then matching it against the exhibit models, feature points may instead be extracted from the shot picture and matched against each exhibit model. The feature points may include, but are not limited to: structural feature points, texture feature points, etc., which is not limited herein. For the extraction process, refer to the technical details of feature-point detection methods such as ORB (Oriented FAST and Rotated BRIEF) and SURF (Speeded-Up Robust Features), which are not described again here.
In a specific implementation scenario, after the matching degree between the exhibit model of each target exhibit and the current exhibit is obtained, the maximum matching degree is taken and checked against a preset threshold. If it is greater, the exhibit to which the corresponding exhibit model belongs is taken as the target exhibit, and the recognition result is determined to include that the current exhibit is that target exhibit; that is, when the maximum matching degree exceeds the preset threshold, the exhibit to which the corresponding exhibit model belongs can be directly determined to be the current exhibit appearing in the shot picture. Otherwise, no target exhibit is present in the shot picture. The preset threshold may be set according to the practical application: when high recognition precision is required it may be set somewhat larger, and when the requirement is relatively loose it may be set appropriately smaller, which is not limited herein. In this way, the exhibit model whose matching degree with the current exhibit is both maximal and greater than the preset threshold is screened out during recognition, and the target exhibit and the recognition result are determined from it, further improving the accuracy of recognizing the target exhibit.
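Continuing the sketch, one simple cloud-to-model matching degree and the threshold decision might look as follows (the tolerance, the threshold value and the nearest-neighbour measure are all assumptions for illustration):
```python
import numpy as np

def matching_degree(scene_pts, model_pts, tol=0.01):
    """Fraction of projected scene points lying within tol (scene units)
    of their nearest model point -- one possible matching degree."""
    d = np.linalg.norm(scene_pts[:, None, :] - model_pts[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < tol))

def identify(model_clouds, scene_pts, threshold=0.6):
    """Take the exhibit model with the maximum matching degree; report a
    target exhibit only if that degree exceeds the preset threshold."""
    degrees = {eid: matching_degree(scene_pts, m)
               for eid, m in model_clouds.items()}
    eid, best = max(degrees.items(), key=lambda kv: kv[1])
    return (eid if best > threshold else None), best
```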
In a specific implementation scenario, the exhibit to which an exhibit model belongs includes a plurality of key parts, and the exhibit model may be marked with the explanation information of each key part at the model position corresponding to that key part. Still taking the porcelain vase as an example, the key parts may include, but are not limited to: the bottle mouth, the bottle neck, the ring foot, the chi ears, etc. On this basis, the exhibit model of the vase is marked with the explanation information of the bottle mouth at the model position corresponding to the bottle mouth, the explanation information of the bottle neck at the model position corresponding to the bottle neck, the explanation information of the ring foot at the model position corresponding to the ring foot, and the explanation information of the chi ears at the model position corresponding to the chi ears. Other cases can be deduced by analogy and are not enumerated here.
In a specific implementation scenario, when the target exhibit is recognized in the shot picture, images of its key parts may be extracted from its exhibit model, and feature-point matching performed between the image region of the current exhibit in the shot picture and the image of each key part, yielding the matching degree of each key part and its image position in the shot picture. On this basis, key parts whose matching degree is greater than a preset threshold can be screened out and determined to be the key parts present in the shot picture.
Step S12: at the image position, an AR indicator is displayed.
Specifically, after the target exhibit is recognized in the shot picture and the image positions of its key parts are detected, an AR indicator can be displayed at each of these image positions. The AR indicator may take a concrete shape such as a magnifying glass or a loudspeaker, which is not limited herein.
Step S13: in response to the user triggering the AR indicator, output the explanation information of the key part.
In one implementation scenario, the triggering manner of the AR indicator may include, but is not limited to: single click, long press, multiple consecutive clicks, etc., without limitation.
In an implementation scenario, please refer to fig. 2a and fig. 2b in combination, which are schematic effect diagrams of two embodiments of the exhibit guide method of the present application. As shown in fig. 2a and 2b, two key parts of the target exhibit are present in the shot picture, one being the "bottle ear" and the other the "pattern", and an AR indicator (shown as a magnifying glass in fig. 2a and 2b) is displayed at the image position of each key part; the user can trigger either AR indicator. As shown in fig. 2b, the explanation information may be text information: after the AR indicator corresponding to the "pattern" is triggered, a text box can be superimposed on the shot picture displaying the explanation information of the "pattern" (e.g., "this pattern is ..., meaning ..."). Alternatively, the explanation information may be audio information, played after the AR indicator corresponding to the "pattern" is triggered; or video information, played in a floating window after the AR indicator corresponding to the "pattern" is triggered.
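As a sketch of how a triggered indicator might dispatch on the explanation type; the handler functions are placeholders standing in for the terminal's UI layer, which the patent does not name:
```python
def show_text_box(part, content):
    print(f"[text box] {part}: {content}")          # overlay a text box

def play_audio(content):
    print(f"[audio] playing {content}")             # voice narration

def play_video_window(content):
    print(f"[floating window] playing {content}")   # video explanation

def on_indicator_triggered(part, explanation):
    """explanation: {'type': 'text'|'audio'|'video', 'content': ...},
    i.e. the explanation information attached to the triggered key part."""
    kind = explanation["type"]
    if kind == "text":
        show_text_box(part, explanation["content"])
    elif kind == "audio":
        play_audio(explanation["content"])
    elif kind == "video":
        play_video_window(explanation["content"])

on_indicator_triggered("pattern", {"type": "text",
                                   "content": "this pattern means ..."})
```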
In one implementation scenario, the target exhibit may have a plurality of key parts. Still taking the porcelain vase as an example, as shown in fig. 2a or 2b, it has two key parts at the current viewing angle (the "bottle ear" and the "pattern"), and, referring to fig. 2c, a schematic effect diagram of another embodiment of the exhibit guide method of the present application, one key part on its back (a notched "ring foot"). Other cases can be deduced by analogy and are not enumerated here. In this case, a first prompt may be output in response to the presence of an undetected key part, prompting adjustment of the shooting pose of the mobile terminal so as to shoot the undetected key part. In this way, when the target exhibit has a plurality of key parts and some remain undetected, the user is prompted by the first prompt, so that the user can get to know all key parts of the target exhibit as fully as possible while visiting it, which helps improve the interactive experience of exhibit guidance.
In a specific implementation scenario, as described above, images of the key parts of the target exhibit may be captured in advance; after the key parts in the shot picture have been detected by matching the shot picture against these images, any key part of the target exhibit left unmatched can be regarded as an undetected key part. Alternatively, if an exhibit model of the target exhibit was constructed in advance with the explanation information of each key part marked at the corresponding model position, the shot picture may be matched against images extracted from each key part of the exhibit model, and any key part left unmatched is likewise an undetected key part. In either case, the first prompt can then be output.
In a specific implementation scenario, before the first prompt is output, the first position of the undetected key part on the target exhibit and the current pose of the mobile terminal may be acquired first, and the shooting pose to which the mobile terminal needs to be adjusted is obtained by analyzing them; the first prompt is then generated from that shooting pose and output. The first position of the undetected key part on the target exhibit may be determined from the pre-constructed exhibit model, or the position of each key part on the target exhibit may be marked when its key-part images are captured in advance, so that the first position can be determined from the marked positions. The current pose of the mobile terminal can be obtained by visual localization such as SLAM, which is not limited herein. From the first position and the current pose, the required adjustment can be estimated, e.g. rotate 20 degrees to the right, rotate 30 degrees to the left, and so on, which is not limited herein. Referring to fig. 2d, a schematic effect diagram of another embodiment of the exhibit guide method of the present application: the computed adjustment is a rotation of 180 degrees to the right, so a first prompt "please rotate 180 degrees to the right to shoot the key part on the back" can be generated. In this way, when an undetected key part exists during guidance, the shooting pose the mobile terminal needs is determined by combining the part's first position on the target exhibit with the terminal's current pose, which improves the accuracy of the prompted shooting pose.
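For illustration, the required rotation could be estimated as below, assuming world-frame positions from the exhibit model and a SLAM pose (the names and the 2D simplification are assumptions, not the patent's method):
```python
import numpy as np

def required_yaw_deg(part_pos, cam_pos, cam_forward):
    """Signed angle in degrees (positive = turn left, negative = turn
    right) rotating the terminal's viewing direction toward the
    undetected key part. Heights are ignored for simplicity."""
    to_part = np.asarray(part_pos, float)[:2] - np.asarray(cam_pos, float)[:2]
    fwd = np.asarray(cam_forward, float)[:2]
    ang = np.degrees(np.arctan2(to_part[1], to_part[0])
                     - np.arctan2(fwd[1], fwd[0]))
    return (ang + 180.0) % 360.0 - 180.0    # wrap into (-180, 180]

# A part directly behind the camera yields +/-180 degrees, matching the
# prompt "please rotate 180 degrees to the right to shoot the back".
print(required_yaw_deg((0, -1), (0, 0), (0, 1)))   # -> -180.0
```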
In a specific implementation scenario, in response to the user adjusting the mobile terminal to a new shooting pose and the target exhibit being recognized in the new shot picture of the mobile terminal, the AR indicators in the new shooting pose may be displayed. For example, the image positions of the key parts of the target exhibit in the new shot picture may be detected again, and an AR indicator displayed at each detected image position. Alternatively, only the image positions of the non-triggered parts in the new shot picture may be detected and AR indicators displayed there, where a non-triggered part is any key part other than a triggered part, a triggered part being a key part whose AR indicator the user has already triggered. Taking the situation shown in figs. 2a to 2d as an example, after the user rotates 180 degrees to the right following the first prompt, the picture shown in fig. 2c is obtained; the above detection process is repeated, and the key parts "bottle ear" and "ring foot" are detected in the shot picture. Since the user has already triggered the AR indicator corresponding to the "bottle ear", the only non-triggered part is the "ring foot", so an AR indicator is displayed at the image position of the "ring foot" in the shot picture (the magnifying glass in fig. 2c). Other cases can be deduced by analogy and are not enumerated here. In this way, triggered parts are not detected and displayed again in the new shot picture after the user adjusts the shooting pose, which reduces their interference with the user's interaction and helps improve the interactive experience of exhibit guidance.
In one implementation scenario, various exhibits may be present in the exhibition hall, some of them related to the target exhibit the user is currently visiting: for example, exhibits with the same or similar key parts as the target exhibit, or exhibits whose key parts form a historical continuity with those of the target exhibit. Such exhibits can be taken as associated exhibits of the target exhibit. In this case, a second prompt may be output in response to the presence in the exhibition hall of an associated exhibit related to the target exhibit, prompting the user to visit it. Referring to fig. 3a, a schematic effect diagram of another embodiment of the exhibit guide method of the present application: after the user interacts with the previously undetected key part (e.g., the "ring foot" in fig. 2c), an associated exhibit related to the currently visited target exhibit is detected in area C1 of the exhibition hall, and a second prompt "There is an exhibit in area C1 of the hall related to the current exhibit. Go visit it?" is generated and output as a pop-up (e.g., a top pop-up). Of course, the second prompt may also be output in other forms (e.g., a voice prompt), which is not limited herein. In this way, when associated exhibits related to the currently visited target exhibit exist in the exhibition hall, the user is prompted to visit them by the second prompt, so that the user's guidance needs for exhibits of interest can be met as far as possible, which helps improve the interactive experience of exhibit guidance.
In a specific implementation scenario, when several associated exhibits related to the target exhibit exist in the exhibition hall, the second prompt may further include an option corresponding to each of them, so that the user can choose which associated exhibit to visit.
In a specific implementation scenario, in response to receiving the user's confirmation instruction about visiting the associated exhibit, the second position of the associated exhibit in the exhibition hall and the current pose of the mobile terminal may be acquired, and on this basis an AR navigation mark is displayed on the current picture of the mobile terminal. For example, a three-dimensional map of the exhibition hall may be constructed in advance so that the second position of the associated exhibit can be obtained from it, while the current pose of the mobile terminal is obtained by visual localization such as SLAM. Path planning is then performed from the second position and the current pose, and the AR navigation mark is displayed along the planned navigation path using the pose obtained from real-time localization, until the user is navigated to the associated exhibit in the exhibition hall. Referring to figs. 3a and 3b (fig. 3b being a schematic effect diagram of another embodiment of the exhibit guide method of the present application): when the user selects "yes" in fig. 3a, the confirmation instruction about visiting the associated exhibit is considered received, and through the above process an AR navigation mark (e.g., the right-turn arrow in fig. 3b) is displayed on the current picture to navigate the user to the associated exhibit. In this way, after the user decides to visit the associated exhibit, the user can be navigated by displaying the AR navigation mark on the current picture of the mobile terminal based on the exhibit's second position in the hall and the terminal's current pose, which helps improve the interactive experience of exhibit guidance.
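As a sketch, the arrow's direction could be derived from the planned path and the live pose roughly as follows (the waypoint representation, tolerance and 2D simplification are assumptions for illustration):
```python
import numpy as np

def next_arrow_angle(nav_path, cam_pos, cam_forward, reach_tol=0.5):
    """nav_path: list of 2D waypoints planned from the current pose to
    the associated exhibit's second position. Returns the signed angle
    (degrees) to the next unreached waypoint, or None on arrival, so
    the UI can render the AR navigation arrow (e.g. ~ -90 -> right turn)."""
    cam = np.asarray(cam_pos, float)
    # drop leading waypoints the user has already reached, in path order
    while nav_path and np.linalg.norm(np.asarray(nav_path[0], float) - cam) <= reach_tol:
        nav_path = nav_path[1:]
    if not nav_path:
        return None                      # arrived at the associated exhibit
    to_wp = np.asarray(nav_path[0], float) - cam
    fwd = np.asarray(cam_forward, float)
    ang = np.degrees(np.arctan2(to_wp[1], to_wp[0])
                     - np.arctan2(fwd[1], fwd[0]))
    return (ang + 180.0) % 360.0 - 180.0
```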
With the above scheme, in response to recognizing the target exhibit in the shot picture of the mobile terminal, the shot picture is detected to obtain the image positions, in the shot picture, of the key parts of the target exhibit, where the target exhibit is an exhibit provided with explanation data in advance and the explanation data includes explanation information for each key part; on this basis, AR indicators are displayed at the image positions, and in response to the user triggering an AR indicator, the explanation information of the key part is output. Thus, during exhibit guidance the user can shoot an exhibit with the mobile terminal to recognize it and, when a target exhibit is recognized, interact through the AR indicators displayed at the image positions of its key parts, so that exhibit guidance is realized in the interactive form of triggered explanation, explanation efficiency is improved, and the navigation experience is enhanced.
Referring to fig. 4, fig. 4 is a schematic frame diagram of an embodiment of the exhibit guide device 40 of the present application. The exhibit guide device 40 includes a detection module 41, an identification module 42 and an interaction module 43. The detection module 41 is configured to, in response to recognizing a target exhibit in the shot picture of a mobile terminal, detect the shot picture to obtain the image position, in the shot picture, of a key part of the target exhibit, where the target exhibit is an exhibit provided with explanation data in advance and the explanation data includes explanation information for each key part of the target exhibit; the identification module 42 is configured to display an AR indicator at the image position; and the interaction module 43 is configured to, in response to the user triggering the AR indicator, output the explanation information of the key part.
With the above scheme, in response to recognizing the target exhibit in the shot picture of the mobile terminal, the shot picture is detected to obtain the image positions, in the shot picture, of the key parts of the target exhibit, where the target exhibit is an exhibit provided with explanation data in advance and the explanation data includes explanation information for each key part; on this basis, AR indicators are displayed at the image positions, and in response to the user triggering an AR indicator, the explanation information of the key part is output. Thus, during exhibit guidance the user can shoot an exhibit with the mobile terminal to recognize it and, when a target exhibit is recognized, interact through the AR indicators displayed at the image positions of its key parts, so that exhibit guidance is realized in the interactive form of triggered explanation, explanation efficiency is improved, and the navigation experience is enhanced.
In some disclosed embodiments, the exhibit guide device 40 further includes a matching module configured to match the shot picture with each exhibit model to obtain a matching result, where the matching result includes the matching degree between the current exhibit in the shot picture and each exhibit model; and a result analyzing module configured to analyze the matching result to obtain a recognition result of the shot picture, where the recognition result includes whether the current exhibit is a target exhibit.
Therefore, the shot picture is matched with each exhibit model to obtain a matching result that includes the matching degree between the current exhibit in the shot picture and each exhibit model, and the matching result is analyzed to obtain the recognition result of the shot picture, which includes whether the current exhibit is a target exhibit. Whether a target exhibit is present in the shot picture can thus be recognized by model matching, which helps improve the accuracy of exhibit recognition.
In some disclosed embodiments, the result analyzing module includes a selection sub-module configured to, in response to the maximum matching degree being higher than a preset threshold, take the exhibit to which the exhibit model corresponding to the maximum matching degree belongs as the target exhibit, and a determination sub-module configured to determine that the recognition result includes that the current exhibit is the target exhibit.
Therefore, when the maximum matching degree is greater than the preset threshold, the exhibit to which the corresponding exhibit model belongs is taken as the target exhibit, and the recognition result is determined to include that the current exhibit is the target exhibit. The exhibit model whose matching degree with the current exhibit is both maximal and greater than the preset threshold can thus be screened out during exhibit recognition, and the target exhibit and the recognition result are determined from it, further improving the accuracy of recognizing the target exhibit.
In some disclosed embodiments, the exhibit to which an exhibit model belongs includes a plurality of key parts, and the exhibit model is marked with the explanation information of each key part at the model position corresponding to that key part.
Therefore, since the exhibit to which the exhibit model belongs contains a plurality of key parts and the exhibit model is marked with the explanation information of each key part at the corresponding model position, the key parts in the shot picture can subsequently be detected directly from the model positions marked with explanation information, and the explanation information of a key part can be obtained directly from the exhibit model. Both detecting the key parts and obtaining the explanation information can thus be realized based on the exhibit model, which helps improve the efficiency of exhibit guidance.
In some disclosed embodiments, the target exhibit has a plurality of key parts, and the exhibit guide device 40 further includes a first prompt module configured to output a first prompt in response to the presence of an undetected key part, where the first prompt prompts adjusting the shooting pose of the mobile terminal so as to shoot the undetected key part.
Therefore, when the target exhibit has a plurality of key parts and an undetected key part exists, the user is prompted by outputting the first prompt to adjust the shooting pose of the mobile terminal so as to shoot the undetected key part. The user can thus get to know all key parts of the target exhibit as fully as possible while visiting it, which helps improve the interactive experience of exhibit guidance.
In some disclosed embodiments, the exhibit guide device 40 further includes a first obtaining module configured to acquire a first position of the undetected key part on the target exhibit and the current pose of the mobile terminal, and a pose analyzing module configured to analyze based on the first position and the current pose to obtain the shooting pose to which the mobile terminal needs to be adjusted.
Therefore, before the first prompt is output, the first position of the undetected key part on the target exhibit and the current pose of the mobile terminal are acquired, and the shooting pose to which the mobile terminal needs to be adjusted is obtained by analyzing the first position and the current pose, which helps improve the accuracy of the prompted shooting pose.
In some disclosed embodiments, the identification module 42 is further configured to display the AR indicator in the new shooting pose in response to the user adjusting the mobile terminal to the new shooting pose and identifying the target exhibit in the new shooting picture of the mobile terminal.
Therefore, when the user adjusts the mobile terminal to a new shooting pose and the target exhibit is identified in the new shot picture of the mobile terminal, the AR indicator is displayed under the new shooting pose. The AR indicator thus follows along as the user adjusts the shooting pose, which is favorable for improving the interactive experience of exhibit guiding.
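This follow behaviour amounts to reprojecting the key part into the new shot picture. A sketch with a standard pinhole model (rotation, translation and intrinsics are assumed inputs; the disclosure does not specify the projection machinery):

```python
import numpy as np

def project_indicator(
    world_point: np.ndarray,  # (3,) key-part position in world coordinates
    R: np.ndarray,            # (3, 3) rotation of the new shooting pose (world -> camera)
    t: np.ndarray,            # (3,) translation of the new shooting pose
    K: np.ndarray,            # (3, 3) camera intrinsic matrix
) -> tuple[float, float] | None:
    """Reproject the key part into the new shot picture so the AR indicator
    follows the adjusted pose; returns the pixel (u, v), or None when the
    part lies behind the camera and nothing should be drawn."""
    p_cam = R @ world_point + t
    if p_cam[2] <= 0:
        return None
    uvw = K @ p_cam
    return float(uvw[0] / uvw[2]), float(uvw[1] / uvw[2])
```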
In some disclosed embodiments, the exhibit guide apparatus 40 further comprises a second prompt module for outputting a second prompt in response to the presence of an associated exhibit in the exhibition hall that is related to the target exhibit; the second prompt is used for prompting the user to visit the associated exhibit.
Therefore, when an associated exhibit related to the target exhibit that the user is currently visiting exists in the exhibition hall, the second prompt is output to prompt the user to visit the associated exhibit. The user's demand for guidance to exhibits of interest can thus be satisfied as far as possible, which is favorable for improving the interactive experience of exhibit guiding.
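A minimal sketch of the second-prompt decision, assuming a hypothetical association table; how exhibits are deemed related (same artist, same period, and so on) is left open by the disclosure:

```python
# Hypothetical association table mapping each exhibit to related exhibits
# currently on display in the exhibition hall.
ASSOCIATIONS: dict[str, list[str]] = {
    "bronze_ding": ["bronze_gui", "bronze_zun"],
}

def second_prompt(target_exhibit_id: str) -> str | None:
    """Return the second prompt when an associated exhibit exists, else None."""
    related = ASSOCIATIONS.get(target_exhibit_id, [])
    if related:
        return f"Related exhibits are on display: {', '.join(related)}. Would you like to visit them?"
    return None  # no associated exhibit in the hall, no second prompt
```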
In some disclosed embodiments, the exhibit guiding apparatus 40 further includes a second obtaining module, configured to, in response to receiving a confirmation instruction of the user regarding visiting the associated exhibit, obtain a second position of the associated exhibit in the exhibition hall, and obtain a current pose of the mobile terminal; the exhibit guiding device 40 further comprises a navigation module for displaying an AR navigation mark on the current screen of the mobile terminal based on the second position and the current pose.
Therefore, after the second prompt is output and a confirmation instruction from the user about visiting the associated exhibit is received, the second position of the associated exhibit in the exhibition hall and the current pose of the mobile terminal are acquired, and an AR navigation mark is displayed on the current picture of the mobile terminal based on the second position and the current pose. Once the user decides to visit the associated exhibit, the user can thus be navigated by the AR navigation mark on the current picture, which is favorable for improving the interactive experience of exhibit guiding.
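For the navigation step, a 2D floor-plan sketch of how the second position and the current pose could yield the direction of the AR navigation mark (a real system would additionally path-plan around walls and showcases; all conventions here are assumptions):

```python
import math

def navigation_turn_angle(
    exhibit_xy: tuple[float, float],  # second position of the associated exhibit on the floor plan
    camera_xy: tuple[float, float],   # current position of the mobile terminal
    camera_yaw_deg: float,            # current heading of the mobile terminal
) -> float:
    """Angle in degrees by which the AR navigation arrow deviates from
    straight ahead: 0 means walk forward, positive means turn right."""
    dx = exhibit_xy[0] - camera_xy[0]
    dy = exhibit_xy[1] - camera_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))  # world-frame bearing of the exhibit
    return (bearing - camera_yaw_deg + 180.0) % 360.0 - 180.0  # wrapped to [-180, 180)
```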
Referring to fig. 5, fig. 5 is a schematic block diagram of a mobile terminal 50 according to an embodiment of the present disclosure. The mobile terminal 50 comprises a camera 51, a display screen 52, a memory 53 and a processor 54, wherein the camera 51, the display screen 52 and the memory 53 are respectively coupled to the processor 54, and the processor 54 is configured to execute program instructions stored in the memory 53 to implement the steps in any of the above embodiments of the exhibit guiding method. Specifically, the mobile terminal 50 may include, but is not limited to, a mobile phone, a tablet computer, smart glasses and the like.
Specifically, the processor 54 is configured to control itself, the camera 51, the display screen 52 and the memory 53 to implement the steps of any one of the above embodiments of the exhibit guiding method. The processor 54 may also be referred to as a CPU (Central Processing Unit) and may be an integrated circuit chip with signal-processing capability. The processor 54 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, discrete gate or transistor logic, or discrete hardware components; a general-purpose processor may be a microprocessor or any conventional processor. In addition, the processor 54 may be implemented jointly by a plurality of integrated circuit chips.
With the above scheme, during exhibit guiding the user can shoot an exhibit through the mobile terminal for recognition. When the exhibit is recognized as the target exhibit, an AR indicator is displayed at the image position of a key part of the target exhibit in the shot picture to interact with the user, so that exhibit guiding is realized in an interactive, explanation-on-demand form, explanation efficiency is promoted, and the guiding experience can be improved.
Referring to fig. 6, fig. 6 is a block diagram of an embodiment of a computer-readable storage medium 60 according to the present application. The computer-readable storage medium 60 stores program instructions 601 executable by a processor, the program instructions 601 being used to implement the steps of any of the above embodiments of the exhibit guiding method.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only one kind of logical division, and other divisions are possible in practice; units or components may be combined or integrated into another system, and some features may be omitted or not implemented. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections of devices or units through some interfaces, and may be electrical, mechanical or in other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, or the part of it that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment, relevant features, states and attributes of the target object are detected or identified by means of various vision-related algorithms, so as to obtain an AR effect that combines the virtual and the real and matches the specific application. For example, the target object may involve a face, limbs, gestures or actions associated with a human body, or identifiers, markers, sand tables, display areas or display items associated with objects or venues. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key-point extraction and tracking of objects, and pose or depth detection of objects. The specific application may involve not only interactive scenes related to real scenes or articles, such as navigation, explanation, reconstruction and superimposed display of virtual effects, but also special-effect processing related to people, such as makeup beautification, body beautification, special-effect display and virtual model display.
The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
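As one hypothetical realization of such a trained network for the matching degree used earlier (the disclosure names neither the backbone nor the metric), a pretrained CNN embedding compared by cosine similarity:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Assumed setup: a pretrained ResNet-18 with the classifier head removed,
# so that each image maps to a 512-dimensional embedding.
_backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
_backbone.fc = torch.nn.Identity()
_backbone.eval()

_preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def matching_degree(shot: Image.Image, reference: Image.Image) -> float:
    """Cosine similarity between embeddings of the shot picture and a stored
    exhibit image, usable as the matching degree in the screening step."""
    a = _backbone(_preprocess(shot).unsqueeze(0))
    b = _backbone(_preprocess(reference).unsqueeze(0))
    return torch.nn.functional.cosine_similarity(a, b).item()
```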

Claims (12)

1. A method for guiding exhibits, comprising:
responding to the fact that a target exhibit is identified in a shooting picture of a mobile terminal, detecting the shooting picture, and obtaining the image position of a key part of the target exhibit in the shooting picture; the target exhibit is an exhibit which is preset with explanation data, and the explanation data comprises explanation information of each key part of the target exhibit;
displaying an AR indicator at the image location;
and responding to the AR indication mark triggered by the user, and outputting the explanation information of the key part.
2. The method according to claim 1, wherein before the step of detecting the shot picture in response to identifying the target exhibit in the shot picture of the mobile terminal, obtaining the image position of the key part of the target exhibit in the shot picture, the method further comprises:
matching the shot picture with each exhibit model respectively to obtain a matching result; wherein the matching result comprises: matching degrees of the current exhibit in the shooting picture with the exhibit models respectively;
analyzing based on the matching result to obtain the recognition result of the shot picture; wherein the recognition result comprises: whether the current exhibit is the target exhibit.
3. The method according to claim 2, wherein the analyzing based on the matching result to obtain the recognition result of the captured image comprises:
and in response to the fact that the maximum matching degree is higher than a preset threshold value, taking the exhibit to which the exhibit model corresponding to the maximum matching degree belongs as the target exhibit, and determining that the identification result comprises that the current exhibit is the target exhibit.
4. The method of claim 2, wherein the exhibit to which the exhibit model belongs comprises a plurality of the key parts, and the exhibit model is marked with the explanation information of the key parts at the model positions corresponding to the key parts.
5. The method of any one of claims 1 to 4, wherein the target exhibit has a plurality of the key sites, the method further comprising:
in response to the presence of an undetected critical part, outputting a first prompt; the first prompt is used for prompting to adjust the shooting pose of the mobile terminal so as to shoot the undetected key part.
6. The method of claim 5, wherein prior to the outputting the first prompt, the method further comprises:
acquiring a first position of the undetected key part on the target exhibit, and acquiring a current pose of the mobile terminal;
and analyzing based on the first position and the current pose to obtain the shooting pose of the mobile terminal to be adjusted.
7. The method of claim 5, further comprising:
and responding to the situation that a user adjusts the mobile terminal to a new shooting pose and identifies the target exhibit in a new shooting picture of the mobile terminal, and displaying the AR indication mark under the new shooting pose.
8. The method of any one of claims 1 to 7, further comprising:
responding to the existence of the related exhibit related to the target exhibit in the exhibition hall, and outputting a second prompt; and the second prompt is used for prompting the user to visit the associated exhibit.
9. The method of claim 8, wherein after the outputting the second prompt, the method further comprises:
in response to receiving a confirmation instruction of a user about visiting the associated exhibit, acquiring a second position of the associated exhibit in the exhibition hall, and acquiring a current pose of the mobile terminal;
and displaying an AR navigation mark on the current picture of the mobile terminal based on the second position and the current pose.
10. An exhibit guide device, comprising:
the mobile terminal comprises a detection module, a display module and a display module, wherein the detection module is used for responding to the identification of a target exhibit in a shot picture of the mobile terminal and detecting the shot picture to obtain the image position of a key part of the target exhibit in the shot picture; the target exhibit is an exhibit which is preset with explanation data, and the explanation data comprises explanation information of each key part of the target exhibit;
an identification module for displaying an AR indicator at the image location;
and the interaction module is used for responding to the AR indication mark triggered by the user and outputting the explanation information of the key part.
11. A mobile terminal comprising a camera, a display, a memory and a processor, wherein the camera, the display and the memory are respectively coupled to the processor, and the processor is configured to execute program instructions stored in the memory to implement the exhibit guiding method according to any one of claims 1 to 9.
12. A computer-readable storage medium having stored thereon program instructions, which when executed by a processor, implement the exhibit guide method of any one of claims 1 to 9.
CN202210989319.6A 2022-08-17 2022-08-17 Exhibit guide method and related device, mobile terminal and storage medium Pending CN115345927A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210989319.6A CN115345927A (en) 2022-08-17 2022-08-17 Exhibit guide method and related device, mobile terminal and storage medium

Publications (1)

Publication Number Publication Date
CN115345927A 2022-11-15

Family

ID=83952393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210989319.6A Pending CN115345927A (en) 2022-08-17 2022-08-17 Exhibit guide method and related device, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN115345927A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115532A (en) * 2023-08-23 2023-11-24 广州一线展示设计有限公司 Exhibition stand intelligent control method and system based on Internet of things
CN117115532B (en) * 2023-08-23 2024-01-26 广州一线展示设计有限公司 Exhibition stand intelligent control method and system based on Internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination