CN111638797A - Display control method and device - Google Patents

Display control method and device

Info

Publication number
CN111638797A
CN111638797A
Authority
CN
China
Prior art keywords
target
display object
building
virtual display
target building
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010509098.9A
Other languages
Chinese (zh)
Inventor
孙红亮
王子彬
李炳泽
潘思霁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shangtang Technology Development Co Ltd
Zhejiang Sensetime Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202010509098.9A
Publication of CN111638797A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a display control method and apparatus, including: acquiring a target image that includes a target building; determining a target virtual display object matching the target building in the target image, and controlling an augmented reality (AR) device to display AR data obtained by fusing the target virtual display object with the real scene where the target building is located; and, in response to a preset trigger instruction issued by a user for the target virtual display object, controlling the AR device to display the AR animation special effect corresponding to the target virtual display object.

Description

Display control method and device
Technical Field
The disclosure relates to the technical field of computers, in particular to a display control method and device.
Background
In the related art, to improve the display effect, some buildings attract viewers by presenting pictures or videos related to the building in front of it. This approach has two drawbacks: on one hand, the displayed pictures or videos are easily overlooked by users, so the display effect is poor; on the other hand, displaying pictures or videos requires a certain amount of physical space, and when that space is insufficient they cannot be displayed at all.
Disclosure of Invention
The embodiment of the disclosure at least provides a display control method and a display control device.
In a first aspect, an embodiment of the present disclosure provides a display control method, including:
acquiring a target image including a target building;
determining a target virtual display object matched with the target building in the target image, and controlling an Augmented Reality (AR) device to display AR data obtained by fusing the target virtual display object and a real scene where the target building is located;
and in response to a preset trigger instruction issued by a user for the target virtual display object, controlling the AR device to display the AR animation special effect corresponding to the target virtual display object.
With the above method, after a target image including a target building is acquired, a target virtual display object matching the target building can be displayed. Because the display object is virtual, the display is not limited by physical space. Moreover, the user can control the AR animation special effect corresponding to the target virtual display object by issuing a preset trigger instruction, which increases the interaction between the user and the target building, enriches the display modes, and improves the display effect.
In one possible embodiment, the determining a target virtual display object matching the target building in the target image includes:
extracting attribute features of the target building based on the target image;
and selecting a virtual object matched with the attribute characteristics of the target building from all the virtual objects as the target virtual display object.
In one possible embodiment, the attribute feature of the target building comprises at least one of:
size information, appearance information, construction information, color information, architectural style, architectural year, geographical location characteristics.
In a possible implementation, the extracting, based on the target image, attribute features of the target building includes:
and inputting the target image into a trained feature extraction network to obtain the attribute features of the target building, wherein the feature extraction network is obtained by training based on a sample image carrying an attribute feature label.
In one possible embodiment, the attribute feature of the target building comprises a geographic location feature;
determining a geographical location characteristic of the target building according to the following method:
determining parameter information of a camera for acquiring the target image;
determining a transformation matrix of an image coordinate system and a world coordinate system based on the parameter information;
determining the position coordinates of the target building under the image coordinate system;
and determining the geographical position characteristics of the target building based on the position coordinates of the target building in the image coordinate system and the transformation matrix.
In a possible embodiment, the preset trigger instruction includes a limb posture trigger instruction and/or a sound trigger instruction.
In a possible implementation manner, the controlling, in response to a preset trigger instruction of a user for the target virtual display object, an AR device to display an AR animation special effect corresponding to the target virtual display object includes:
after receiving a target trigger instruction in a plurality of preset trigger instructions initiated by a user, selecting a target AR animation special effect type corresponding to the target trigger instruction according to a mapping relation between the plurality of preset trigger instructions and the AR animation special effect type;
and controlling the AR equipment to display the AR animation special effect under the target AR animation special effect type corresponding to the target virtual display object.
In a second aspect, an embodiment of the present disclosure further provides a display control apparatus, including:
an acquisition module for acquiring a target image including a target building;
the determining module is used for determining a target virtual display object matched with the target building in the target image and controlling AR equipment to display AR data obtained by fusing the target virtual display object and a real scene where the target building is located;
and the control module is used for responding to a preset trigger instruction of a user aiming at the target virtual display object and controlling the AR equipment to display the AR animation special effect corresponding to the target virtual display object.
In one possible embodiment, the determining module, when determining the target virtual display object matching the target building in the target image, is configured to:
extracting attribute features of the target building based on the target image;
and selecting a virtual object matched with the attribute characteristics of the target building from all the virtual objects as the target virtual display object.
In one possible embodiment, the attribute feature of the target building comprises at least one of:
size information, appearance information, construction information, color information, architectural style, architectural year, geographical location characteristics.
In a possible embodiment, the determining module, when extracting the attribute feature of the target building based on the target image, is configured to:
and inputting the target image into a trained feature extraction network to obtain the attribute features of the target building, wherein the feature extraction network is obtained by training based on a sample image carrying an attribute feature label.
In one possible embodiment, the attribute feature of the target building comprises a geographic location feature;
the determination module is further configured to determine the geographical location characteristic of the target building according to the following method:
determining parameter information of a camera for acquiring the target image;
determining a transformation matrix of an image coordinate system and a world coordinate system based on the parameter information;
determining the position coordinates of the target building under the image coordinate system;
and determining the geographical position characteristics of the target building based on the position coordinates of the target building in the image coordinate system and the transformation matrix.
In a possible embodiment, the preset trigger instruction includes a limb posture trigger instruction and/or a sound trigger instruction.
In a possible implementation manner, when, in response to a preset trigger instruction for the target virtual display object by a user, controlling the AR device to display an AR animation special effect corresponding to the target virtual display object, the control module is configured to:
after receiving a target trigger instruction in a plurality of preset trigger instructions initiated by a user, selecting a target AR animation special effect type corresponding to the target trigger instruction according to a mapping relation between the plurality of preset trigger instructions and the AR animation special effect type;
and controlling the AR equipment to display the AR animation special effect under the target AR animation special effect type corresponding to the target virtual display object.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the first aspect or any one of its possible implementations.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be understood that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive further related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a presentation control method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating an architecture of a presentation control apparatus according to an embodiment of the present disclosure;
fig. 3 shows a schematic structural diagram of a computer device 300 provided by the embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions are described below completely with reference to the drawings in the embodiments. Evidently, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated in the figures, can be arranged and designed in a wide variety of configurations. Accordingly, the following detailed description is not intended to limit the scope of the claimed disclosure but merely represents selected embodiments; all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the disclosure.
In the related art, displaying a picture or video related to a building requires, on one hand, dedicated display space, and on the other hand, the displayed content may simply be overlooked by users, so the display effect is poor.
Based on the research, the present disclosure provides a display control method and apparatus, which may display a target virtual display object matched with a target building after acquiring a target image including the target building, and the display object is virtual, so the method may not be limited by space; the user can control the AR animation special effect corresponding to the target virtual display object by sending a preset trigger instruction, so that the interaction between the user and the target building is increased, the display mode is enriched, and the display effect is improved.
The above drawbacks were identified by the inventors through practical and careful study; therefore, both the discovery of these problems and the solutions the present disclosure proposes for them constitute the inventors' contribution to this disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, the display control method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of this method is generally a server.
The following describes the presentation control method provided by the embodiment of the present disclosure by taking an execution subject as a server.
Referring to fig. 1, a flowchart of a display control method provided in the embodiment of the present disclosure is shown, where the method includes steps 101 to 103, where:
step 101, a target image comprising a target building is acquired.
The target image including the target building may be obtained through an AR device: the AR device collects images in real time and sends them to the server, and the server detects each image to determine whether it includes the target building. If an image includes the target building, it is determined to be the target image.
In another possible implementation, the position of the user can be detected in real time through the AR device, and images collected by the AR device are only run through detection once the user is detected to have entered a target position area, i.e., an area from which the target building can be photographed. Before the user enters the target position area, the images captured by the AR device will not include the target building, so those images need not be detected, which reduces the amount of computation.
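The area-gated detection above can be sketched as follows. This is a minimal illustration, assuming the target position area can be modeled as a simple axis-aligned rectangle; the function names and the rectangular area model are assumptions, not part of the patent text.

```python
def in_target_area(position, area):
    """Check whether a 2D (x, y) position lies inside a rectangular target area."""
    x, y = position
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max

def should_run_detection(position, area):
    # Skip building detection entirely while the user is outside the target
    # position area, saving the per-frame computation mentioned above.
    return in_target_area(position, area)
```

In practice the area would be derived from the building's geographic location and the camera's effective viewing range rather than hard-coded.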
Step 102, determining a target virtual display object matched with the target building in the target image, and controlling an augmented reality AR device to display AR data obtained by fusing the target virtual display object and a real scene where the target building is located.
In a possible implementation manner, when determining a target virtual display object matched with the target building in the target image, the attribute feature of the target building may be extracted based on the target image; and then selecting a virtual object matched with the attribute characteristics of the target building from all the virtual objects as the target virtual display object.
Wherein the attribute characteristics of the target building include at least one of:
size information, appearance information, construction information, color information, architectural style, architectural year, geographical location characteristics.
The size information may be the real size of the target building, and may specifically include its length and width. When determining the size information, the pixel size of the target building in the target image may first be determined, and the real size may then be computed from that pixel size using a preset ratio between pixel size and real size.
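The pixel-to-real-size conversion above amounts to a single multiplication. A minimal sketch, assuming the preset ratio is expressed as metres per pixel (the name and unit choice are illustrative):

```python
def estimate_real_size(pixel_width, pixel_height, metres_per_pixel):
    """Estimate the building's real-world width and height (in metres)
    from its pixel extent in the target image, using a preset ratio
    between pixel size and real size."""
    return pixel_width * metres_per_pixel, pixel_height * metres_per_pixel
```

For example, a building spanning 200 x 400 pixels at 0.5 metres per pixel would be estimated as 100 m wide and 200 m tall.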
The external shape information of the target building may include an outline of the target building, and the construction information of the target building may include the number of floors, the floor height, and the like of the target building.
The geographic location feature of the target building may include its position coordinates in a world coordinate system. When determining the geographic location feature, the parameter information of the camera that captured the target image may first be determined; a transformation matrix between the image coordinate system and the world coordinate system is then determined based on that parameter information; next, the position coordinates of the target building in the image coordinate system are determined; finally, the geographic location feature of the target building is determined from those image-coordinate position coordinates and the transformation matrix.
The parameter information of the camera may include intrinsic parameters, extrinsic parameters, distortion parameters, and the like, and may be determined by calibrating the camera.
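The image-to-world transformation described above can be sketched with a homogeneous transform. This is a simplified illustration, assuming the mapping from the image plane to the world ground plane can be expressed as a single 3x3 homography H derived from the calibrated camera parameters; in a full implementation the intrinsic and extrinsic matrices would be used separately and lens distortion corrected first.

```python
def image_to_world(u, v, H):
    """Map an image coordinate (u, v) to 2D world coordinates using a
    3x3 homography H (given as nested lists), with the usual
    homogeneous-coordinate division by w."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w
```

With the identity matrix as H the mapping is a no-op; a real H would be computed from the camera calibration.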
In a possible implementation manner, when extracting the attribute features of the target building based on the target image, the target image may be input into a trained feature extraction network to obtain the attribute features of the target building, where the feature extraction network is trained based on a sample image carrying an attribute feature tag.
Specifically, when training the feature extraction network, a sample image is input into the network to be trained, and the network outputs predicted attribute features of the building it contains. A loss value for the current training round is then computed from the predicted attribute features and the attribute feature labels. When the loss value does not meet a preset condition, the network parameters are adjusted and the training step is executed again, until the computed loss value meets the preset condition.
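The loop above (predict, compute loss against the label, adjust parameters, repeat until the loss meets the preset condition) can be sketched in miniature. This stand-in replaces the feature extraction network with a single scalar parameter fitted by gradient descent on a squared loss; everything here is illustrative, not the patent's actual network.

```python
def train_feature_extractor(samples, lr=0.1, loss_threshold=1e-4):
    """Toy training loop: fit a scalar 'network' parameter w so that the
    prediction w * x matches the attribute label y for each sample,
    repeating until the loss meets the preset condition."""
    w = 0.0
    loss = float("inf")
    while loss >= loss_threshold:
        loss, grad = 0.0, 0.0
        for x, y in samples:
            pred = w * x                  # predicted attribute feature
            loss += (pred - y) ** 2       # loss vs. the attribute label
            grad += 2 * (pred - y) * x
        w -= lr * grad / len(samples)     # adjust the network parameters
    return w
```

A real implementation would use a deep-learning framework and image inputs, but the control flow, the loss-threshold stopping condition in particular, is the same.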
In another possible implementation, a correspondence between buildings and virtual objects may be set directly; when determining the target virtual display object matched with the target building in the target image, the matching virtual object can simply be looked up in this correspondence.
If no virtual object matched with the target building is found in the correspondence, in another example of the present disclosure, the similarity between the target building and each building in the correspondence may be computed, and the virtual object corresponding to the most similar building is taken as the target virtual display object for the target building.
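The two-step lookup above (direct correspondence first, then a highest-similarity fallback) can be sketched as follows. The similarity function is left as a caller-supplied parameter since the patent does not specify how similarity is computed; all names here are illustrative.

```python
def match_virtual_object(building, correspondence, similarity):
    """Return the virtual object mapped to a building. If the building is
    not in the correspondence table, fall back to the virtual object of
    the known building with the highest similarity score."""
    if building in correspondence:
        return correspondence[building]
    best = max(correspondence, key=lambda known: similarity(building, known))
    return correspondence[best]
```

In practice the similarity would be computed between extracted attribute features (size, style, and so on) rather than between building identifiers.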
In a specific implementation, after the target virtual display object matched with the target building in the target image is determined, the display position of the target virtual display object can be determined; the target virtual display object is then fused, according to that display position, with the real scene where the target building is located to generate AR data, and the generated AR data is displayed through the augmented reality AR device.
When determining the display position of the target virtual display object, in one possible implementation, the display position may be determined from the geographic location feature of the target building and a preset relative position relationship between the target building and the target virtual display object.
The relative position relationship between the target building and the target virtual display object may be the coordinate difference between the two in the world coordinate system.
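Since the relative position relationship is a coordinate difference, the display position is just a vector addition. A minimal sketch, assuming 3D world coordinates as plain tuples (the names are illustrative):

```python
def display_position(building_pos, relative_offset):
    """Place the virtual display object at the building's world position
    plus the preset coordinate offset (the relative position relationship)."""
    return tuple(b + o for b, o in zip(building_pos, relative_offset))
```

For example, an offset of (0, 0, 3) would float the virtual object 3 units above the building's anchor point.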
103, responding to a preset trigger instruction of the user for the target virtual display object, and controlling the AR equipment to display the AR animation special effect corresponding to the target virtual display object.
The preset trigger instruction comprises a limb posture trigger instruction and/or a sound trigger instruction.
Controlling the AR device to display the AR animation special effect in response to a limb posture trigger instruction may proceed as follows: after the target virtual display object is displayed, a video including the target user is acquired and the user's limb posture in the video is detected; when the detected limb posture is consistent with a preset limb posture, a limb posture trigger instruction is generated and sent to the server, which then controls the AR device to display the AR animation special effect corresponding to the target virtual display object.
Similarly, for a sound trigger instruction, sound information collected by the AR device may be acquired after the target virtual display object is displayed, and voice recognition is performed on it; if the target speech is detected in the recognition result, a sound trigger instruction is generated and sent to the server, which controls the AR device to display the AR animation special effect corresponding to the target virtual display object.
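The two trigger paths above (posture match, speech match) can be combined into one sketch. This assumes posture detection and speech recognition have already produced a pose label and a transcript; the instruction names and comparison logic are illustrative.

```python
def detect_trigger(detected_pose, recognized_text, preset_pose, target_phrase):
    """Generate a trigger instruction when the detected limb posture matches
    the preset posture, or when the recognized speech contains the target
    phrase; otherwise generate nothing."""
    if detected_pose == preset_pose:
        return "limb_posture_trigger"
    if target_phrase in recognized_text:
        return "sound_trigger"
    return None
```

The returned instruction would then be sent to the server, which selects and plays the corresponding AR animation special effect.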
In a specific implementation, when controlling the AR device to display the AR animation special effect in response to a preset trigger instruction, after receiving a target trigger instruction among the multiple preset trigger instructions initiated by the user, the target AR animation special-effect type corresponding to that trigger instruction is selected according to a mapping between the preset trigger instructions and AR animation special-effect types; the AR device is then controlled to display the AR animation special effect of the target type corresponding to the target virtual display object.
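The mapping described above is a simple table lookup. A minimal sketch, with entirely hypothetical trigger and effect-type names:

```python
# Hypothetical mapping between preset trigger instructions and
# AR animation special-effect types; several triggers may share a type.
EFFECT_TYPE_BY_TRIGGER = {
    "wave_hand": "greeting_effect",
    "say_hello": "greeting_effect",
    "jump": "celebration_effect",
}

def select_effect_type(trigger):
    """Select the AR animation special-effect type for a received trigger
    instruction via the preset mapping; unknown triggers select nothing."""
    return EFFECT_TYPE_BY_TRIGGER.get(trigger)
```

The server would then instruct the AR device to play the target virtual display object's animation under the selected effect type.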
The AR animation special effect corresponding to the target virtual display object may dynamically display the object in different states; for example, if the target virtual display object is an elephant, the corresponding AR animation special effect may show the elephant swinging its tail.
Each target virtual display object may have a plurality of corresponding AR animation special effects, and the trigger instructions of the plurality of AR display special effects corresponding to the same target virtual display object may be different.
The controlling the AR device to display the AR animation special effect corresponding to the target virtual display object may be controlling the AR device to display an AR animation special effect corresponding to the target virtual display object and the preset trigger instruction.
Continuing the above example, if the preset trigger instruction corresponding to the elephant swinging its tail is received, the AR device is controlled to display the tail-swinging AR animation special effect; if the preset trigger instruction corresponding to the elephant swinging its trunk is received, the AR device is controlled to display the trunk-swinging AR animation special effect.
In another possible implementation, there may be multiple target virtual display objects matched with the target building, and different target virtual display objects may correspond to different preset trigger instructions. In that case, when responding to a user's preset trigger instruction, the target virtual display object corresponding to that instruction is first determined from among the currently displayed target virtual display objects, and the AR device is then controlled to display the AR animation special effect corresponding to that object.
For example, if the target virtual display objects matched with the target building include an elephant and a bird, then after receiving a user's preset trigger instruction, it is determined whether the instruction matches the elephant or the bird; if it matches the elephant, the AR animation special effect corresponding to the elephant is displayed.
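Dispatching a trigger instruction to the right one of several displayed virtual objects can be sketched as below, assuming each object is registered with the set of trigger instructions it responds to (the data shape and names are illustrative).

```python
def dispatch_trigger(trigger, objects):
    """Among the currently displayed virtual objects, find the one whose
    preset trigger set contains the received instruction, so that only
    its AR animation special effect is played."""
    for name, triggers in objects.items():
        if trigger in triggers:
            return name
    return None  # no displayed object responds to this instruction
```

With `{"elephant": {"stomp"}, "bird": {"whistle"}}`, a "whistle" instruction would select the bird and leave the elephant untouched.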
In another possible implementation of the present disclosure, display of the target virtual display object may be stopped in response to a restore instruction for it. The restore instruction may be generated by the user triggering a restore button on the AR device, or may be a preset default instruction; it may likewise include a limb posture instruction and/or a sound instruction. The method for detecting whether the user has triggered the restore instruction is the same as that for the preset trigger instruction and is not repeated here.
By the method, the target virtual display object matched with the target building can be displayed after the target image comprising the target building is acquired, and the display object is virtual, so that the method is not limited by space; the user can control the AR animation special effect corresponding to the target virtual display object by sending a preset trigger instruction, so that the interaction between the user and the target building is increased, the display mode is enriched, and the display effect is improved.
It will be understood by those skilled in the art that in the above method, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific execution order of the steps should be determined by their function and possible internal logic.
Based on the same inventive concept, a display control device corresponding to the display control method is also provided in the embodiments of the present disclosure, and since the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the display control method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 2, a schematic diagram of the architecture of a display control apparatus according to an embodiment of the present disclosure is shown. The apparatus includes: an acquisition module 201, a determination module 202, and a control module 203; wherein:
an acquisition module 201 for acquiring a target image including a target building;
a determining module 202, configured to determine a target virtual display object that is matched with the target building in the target image, and control an Augmented Reality (AR) device to display AR data obtained by fusing the target virtual display object and a real scene where the target building is located;
the control module 203 is configured to respond to a preset trigger instruction of a user for the target virtual display object, and control the AR device to display an AR animation special effect corresponding to the target virtual display object.
In one possible embodiment, the determining module 202, when determining the target virtual display object matching the target building in the target image, is configured to:
extracting attribute features of the target building based on the target image;
and selecting a virtual object matched with the attribute characteristics of the target building from all the virtual objects as the target virtual display object.
In one possible embodiment, the attribute feature of the target building comprises at least one of:
size information, appearance information, construction information, color information, architectural style, architectural year, geographical location characteristics.
In a possible implementation, the determining module 202, when extracting the attribute feature of the target building based on the target image, is configured to:
and inputting the target image into a trained feature extraction network to obtain the attribute features of the target building, wherein the feature extraction network is obtained by training based on a sample image carrying an attribute feature label.
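The feature-extraction step above can be sketched as follows. This is a minimal illustrative stand-in, assuming a toy single-layer network with random (untrained) weights and hypothetical attribute labels such as "modern" or "gothic"; the patent's actual network is a trained model obtained from sample images carrying attribute-feature labels:

```python
import numpy as np

# Hypothetical attribute labels for illustration; not from the patent.
ATTRIBUTE_LABELS = ["modern", "classical", "gothic"]

rng = np.random.default_rng(0)
# Toy single-layer weights; a real system would learn these from
# sample images labeled with attribute features.
W = rng.standard_normal((len(ATTRIBUTE_LABELS), 64 * 64))
b = np.zeros(len(ATTRIBUTE_LABELS))

def extract_attribute_features(image: np.ndarray) -> dict:
    """Return one probability per attribute label for a 64x64 grayscale image."""
    logits = W @ image.reshape(-1) + b
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    probs = exp / exp.sum()
    return dict(zip(ATTRIBUTE_LABELS, probs))

features = extract_attribute_features(rng.random((64, 64)))
```

The matching step in module 202 would then compare these predicted attributes against the attributes associated with each candidate virtual object.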
In one possible embodiment, the attribute feature of the target building comprises a geographic location feature;
the determining module 202 is further configured to determine the geographical location characteristic of the target building according to the following method:
determining parameter information of a camera for acquiring the target image;
determining a transformation matrix of an image coordinate system and a world coordinate system based on the parameter information;
determining the position coordinates of the target building under the image coordinate system;
and determining the geographical position characteristics of the target building based on the position coordinates of the target building in the image coordinate system and the transformation matrix.
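The four steps above correspond to a standard pinhole-camera back-projection. Below is a minimal sketch, assuming illustrative intrinsic parameters `K`, an identity camera pose, and a known depth for the building's pixel (the patent does not specify how the depth is obtained); all numeric values are hypothetical:

```python
import numpy as np

# Illustrative camera parameter information (intrinsics and pose).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])  # intrinsic matrix
R = np.eye(3)                          # rotation, world -> camera
t = np.array([0.0, 0.0, 0.0])          # translation, world -> camera

def image_to_world(u: float, v: float, depth: float) -> np.ndarray:
    """Back-project pixel (u, v) at the given depth into world coordinates."""
    pixel = np.array([u, v, 1.0])
    p_cam = np.linalg.inv(K) @ pixel * depth  # point in camera coordinates
    return R.T @ (p_cam - t)                  # camera -> world

def world_to_image(p_world: np.ndarray) -> np.ndarray:
    """Project a world point back to pixel coordinates (round-trip check)."""
    p_cam = R @ p_world + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]
```

With the building's pixel coordinates and a depth estimate, `image_to_world` yields the position used as the geographical location feature.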
In a possible embodiment, the preset trigger instruction includes a limb posture trigger instruction and/or a voice trigger instruction.
In a possible implementation manner, when, in response to a preset trigger instruction for the target virtual display object by a user, controlling an AR device to display an AR animation special effect corresponding to the target virtual display object, the control module 203 is configured to:
after receiving a target trigger instruction in a plurality of preset trigger instructions initiated by a user, selecting a target AR animation special effect type corresponding to the target trigger instruction according to a mapping relation between the plurality of preset trigger instructions and the AR animation special effect type;
and controlling the AR equipment to display the AR animation special effect under the target AR animation special effect type corresponding to the target virtual display object.
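The mapping between preset trigger instructions and AR animation special-effect types can be sketched as a simple lookup table; the trigger names and effect types below are hypothetical placeholders, not taken from the patent:

```python
# Hypothetical mapping between preset trigger instructions and
# AR animation special-effect types.
EFFECT_MAP = {
    "wave_hand": "fireworks",
    "clap": "light_show",
    "say_hello": "greeting_animation",
}

def select_effect(trigger: str) -> str:
    """Return the AR animation effect type mapped to a recognized trigger."""
    if trigger not in EFFECT_MAP:
        raise ValueError(f"unrecognized trigger: {trigger}")
    return EFFECT_MAP[trigger]
```

The control module would then instruct the AR device to render the returned effect type for the current target virtual display object.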
By the device, after a target image including the target building is acquired, a target virtual display object matched with the target building can be displayed; since the display object is virtual, the display is not limited by physical space. The user can control the AR animation special effect corresponding to the target virtual display object by issuing a preset trigger instruction, which increases the interaction between the user and the target building, enriches the display modes, and improves the display effect.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, an embodiment of the present disclosure also provides a computer device. Referring to fig. 3, a schematic structural diagram of a computer device 300 provided in an embodiment of the present disclosure includes a processor 301, a memory 302, and a bus 303. The memory 302 is used for storing execution instructions and includes an internal memory 3021 and an external memory 3022. The internal memory 3021 temporarily stores operation data for the processor 301 and data exchanged with the external memory 3022, such as a hard disk; the processor 301 exchanges data with the external memory 3022 through the internal memory 3021. When the computer device 300 runs, the processor 301 communicates with the memory 302 through the bus 303, so that the processor 301 executes the following instructions:
acquiring a target image including a target building;
determining a target virtual display object matched with the target building in the target image, and controlling an Augmented Reality (AR) device to display AR data obtained by fusing the target virtual display object and a real scene where the target building is located;
responding to a preset trigger instruction of a user for the target virtual display object, and controlling AR equipment to display the AR animation special effect corresponding to the target virtual display object.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the display control method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the display control method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the steps of the display control method described in the above method embodiments, which may be referred to specifically for the above method embodiments, and are not described herein again.
The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not described here again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative; for example, the division into units is only one logical division, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope of the present disclosure, still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions of some technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A display control method, comprising:
acquiring a target image including a target building;
determining a target virtual display object matched with the target building in the target image, and controlling an Augmented Reality (AR) device to display AR data obtained by fusing the target virtual display object and a real scene where the target building is located;
responding to a preset trigger instruction of a user for the target virtual display object, and controlling AR equipment to display the AR animation special effect corresponding to the target virtual display object.
2. The method of claim 1, wherein said determining a target virtual display object that matches the target building in the target image comprises:
extracting attribute features of the target building based on the target image;
and selecting a virtual object matched with the attribute characteristics of the target building from all the virtual objects as the target virtual display object.
3. The method of claim 2, wherein the attribute feature of the target building comprises at least one of:
size information, appearance information, construction information, color information, architectural style, architectural year, geographical location characteristics.
4. The method of claim 2 or claim 3, wherein said extracting attribute features of the target building based on the target image comprises:
and inputting the target image into a trained feature extraction network to obtain the attribute features of the target building, wherein the feature extraction network is obtained by training based on a sample image carrying an attribute feature label.
5. The method according to any one of claims 2 to 4, wherein the attribute characteristics of the target building include a geographical location characteristic;
determining a geographical location characteristic of the target building according to the following method:
determining parameter information of a camera for acquiring the target image;
determining a transformation matrix of an image coordinate system and a world coordinate system based on the parameter information;
determining the position coordinates of the target building under the image coordinate system;
and determining the geographical position characteristics of the target building based on the position coordinates of the target building in the image coordinate system and the transformation matrix.
6. The method according to claim 1, wherein the preset trigger instruction comprises a limb posture trigger instruction and/or a voice trigger instruction.
7. The method according to claim 1, wherein the controlling, in response to a preset trigger instruction of the user for the target virtual display object, the AR device to display the AR animation special effect corresponding to the target virtual display object includes:
after receiving a target trigger instruction in a plurality of preset trigger instructions initiated by a user, selecting a target AR animation special effect type corresponding to the target trigger instruction according to a mapping relation between the plurality of preset trigger instructions and the AR animation special effect type;
and controlling the AR equipment to display the AR animation special effect under the target AR animation special effect type corresponding to the target virtual display object.
8. A display control apparatus, comprising:
an acquisition module for acquiring a target image including a target building;
the determining module is used for determining a target virtual display object matched with the target building in the target image and controlling AR equipment to display AR data obtained by fusing the target virtual display object and a real scene where the target building is located;
and the control module is used for responding to a preset trigger instruction of a user aiming at the target virtual display object and controlling the AR equipment to display the AR animation special effect corresponding to the target virtual display object.
9. A computer device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the display control method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, performs the steps of the method according to any one of claims 1 to 7.
CN202010509098.9A 2020-06-07 2020-06-07 Display control method and device Pending CN111638797A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010509098.9A CN111638797A (en) 2020-06-07 2020-06-07 Display control method and device


Publications (1)

Publication Number Publication Date
CN111638797A true CN111638797A (en) 2020-09-08

Family

ID=72330064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010509098.9A Pending CN111638797A (en) 2020-06-07 2020-06-07 Display control method and device

Country Status (1)

Country Link
CN (1) CN111638797A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330439A (en) * 2017-07-14 2017-11-07 腾讯科技(深圳)有限公司 A kind of determination method, client and the server of objects in images posture
CN206743450U (en) * 2017-05-25 2017-12-12 成都川江信息技术有限公司 It is a kind of AR realtime graphics are provided building building cell display systems
CN110286773A (en) * 2019-07-01 2019-09-27 腾讯科技(深圳)有限公司 Information providing method, device, equipment and storage medium based on augmented reality
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium
CN110716646A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method, device, equipment and storage medium


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022055421A1 (en) * 2020-09-09 2022-03-17 脸萌有限公司 Augmented reality-based display method, device, and storage medium
US11587280B2 (en) 2020-09-09 2023-02-21 Beijing Zitiao Network Technology Co., Ltd. Augmented reality-based display method and device, and storage medium
RU2801917C1 (en) * 2020-09-09 2023-08-18 Бейджин Цзытяо Нетворк Текнолоджи Ко., Лтд. Method and device for displaying images based on augmented reality and medium for storing information
CN112308951A (en) * 2020-10-09 2021-02-02 深圳市大富网络技术有限公司 Animation production method, system, device and computer readable storage medium
CN112308977A (en) * 2020-10-29 2021-02-02 字节跳动有限公司 Video processing method, video processing apparatus, and storage medium
CN112308977B (en) * 2020-10-29 2024-04-16 字节跳动有限公司 Video processing method, video processing device, and storage medium
WO2022132033A1 (en) * 2020-12-18 2022-06-23 脸萌有限公司 Display method and apparatus based on augmented reality, and device and storage medium
CN113643432A (en) * 2021-08-20 2021-11-12 北京市商汤科技开发有限公司 Data editing method and device, computer equipment and storage medium
CN114390215A (en) * 2022-01-20 2022-04-22 脸萌有限公司 Video generation method, device, equipment and storage medium
CN114390214A (en) * 2022-01-20 2022-04-22 脸萌有限公司 Video generation method, device, equipment and storage medium
CN114390215B (en) * 2022-01-20 2023-10-24 脸萌有限公司 Video generation method, device, equipment and storage medium
CN114390214B (en) * 2022-01-20 2023-10-31 脸萌有限公司 Video generation method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111638797A (en) Display control method and device
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
CN111638793B (en) Display method and device of aircraft, electronic equipment and storage medium
CN112348969A (en) Display method and device in augmented reality scene, electronic equipment and storage medium
JP2022505998A (en) Augmented reality data presentation methods, devices, electronic devices and storage media
CN112148197A (en) Augmented reality AR interaction method and device, electronic equipment and storage medium
CN110545442B (en) Live broadcast interaction method and device, electronic equipment and readable storage medium
CN112148189A (en) Interaction method and device in AR scene, electronic equipment and storage medium
CN111640197A (en) Augmented reality AR special effect control method, device and equipment
CN111643900A (en) Display picture control method and device, electronic equipment and storage medium
CN111651047A (en) Virtual object display method and device, electronic equipment and storage medium
CN112637665B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN112905014A (en) Interaction method and device in AR scene, electronic equipment and storage medium
CN111639613B (en) Augmented reality AR special effect generation method and device and electronic equipment
CN111651057A (en) Data display method and device, electronic equipment and storage medium
CN111667588A (en) Person image processing method, person image processing device, AR device and storage medium
CN111640169A (en) Historical event presenting method and device, electronic equipment and storage medium
CN111862341A (en) Virtual object driving method and device, display equipment and computer storage medium
CN111652971A (en) Display control method and device
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
CN112150349A (en) Image processing method and device, computer equipment and storage medium
CN112882576A (en) AR interaction method and device, electronic equipment and storage medium
CN112991555B (en) Data display method, device, equipment and storage medium
CN111652983A (en) Augmented reality AR special effect generation method, device and equipment
CN111651058A (en) Historical scene control display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination