CN111640190A - AR effect presentation method and apparatus, electronic device and storage medium - Google Patents


Publication number
CN111640190A
Authority
CN
China
Prior art keywords
picture
real scene
entity
virtual object
attribute information
Prior art date
Legal status
Pending
Application number
CN202010488809.9A
Other languages
Chinese (zh)
Inventor
孙红亮
李炳泽
武明飞
刘小兵
Current Assignee
Zhejiang Shangtang Technology Development Co Ltd
Zhejiang Sensetime Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202010488809.9A
Publication of CN111640190A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an AR effect presentation method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a real scene image; determining attribute information of an entity picture displayed in the real scene based on the real scene image; acquiring virtual object information matched with the attribute information of the entity picture; and using the virtual object information to control an AR device to present the AR effect corresponding to the virtual object information in an associated area corresponding to the entity picture.

Description

AR effect presentation method and apparatus, electronic device and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a method and an apparatus for presenting an AR effect, an electronic device, and a storage medium.
Background
Augmented Reality (AR) technology superimposes simulated entity information (visual information, sound, touch, and the like) onto the real world, so that the real environment and virtual objects are presented in the same picture or space in real time. At present, however, an AR effect is usually presented by superimposing a preset three-dimensional virtual model onto three-dimensional real space, for example superimposing a fixed three-dimensional virtual model of a vase on a dining table in a room. This approach makes the presented AR effect monotonous, resulting in a poor presentation.
Disclosure of Invention
The embodiment of the disclosure at least provides a presentation method and device of an AR effect, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a method for presenting an augmented reality AR effect, including:
acquiring a real scene image;
determining attribute information of an entity picture displayed in a real scene based on the real scene image;
acquiring virtual object information matched with the attribute information of the entity picture;
and utilizing the virtual object information to control AR equipment to present the AR effect corresponding to the virtual object information in the associated area corresponding to the entity picture.
In the embodiment of the disclosure, the attribute information of a real entity picture in a real scene can be identified based on an image of the real scene, virtual object information matched with the attribute information of the entity picture can be acquired, and the virtual object information can be used to present a corresponding AR effect in the area associated with the entity picture. In this way, the display content of the virtual model can be superimposed on the picture content of the real entity picture, breaking through the limitations of traditional two-dimensional display and enclosed three-dimensional display, presenting a vivid AR effect on or around the entity picture in the real scene, and giving the user the visual experience of the virtual model "leaping off the paper". This not only improves the presentation of the AR effect, but also enhances the user's visual experience.
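The four-step flow described above (acquire image, determine attributes, match a virtual object, present the effect) can be sketched as a minimal pipeline. All function names and bodies below are hypothetical stand-ins for illustration, not the patented implementation:

```python
# Hypothetical sketch of the four-step AR presentation flow.

def acquire_real_scene_image():
    # Stand-in for a camera capture; returns a dummy frame.
    return {"pixels": [[0] * 4] * 4}

def determine_picture_attributes(image):
    # Stand-in for matching/recognition; returns attribute information.
    return {"style": "oil painting", "tone": "warm", "content": "person"}

def fetch_virtual_object_info(attributes):
    # Stand-in for looking up a virtual object matching the attributes.
    return {"model": "facial_makeup_sticker", "anchor": "picture_region"}

def present_ar_effect(image, virtual_object_info):
    # Stand-in for rendering; returns a description of the presented effect.
    return f"render {virtual_object_info['model']} at {virtual_object_info['anchor']}"

frame = acquire_real_scene_image()
attrs = determine_picture_attributes(frame)
obj = fetch_virtual_object_info(attrs)
effect = present_ar_effect(frame, obj)
print(effect)  # render facial_makeup_sticker at picture_region
```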
In some embodiments of the present disclosure, the determining attribute information of the physical picture displayed in the real scene based on the real scene image includes:
matching the real scene image with a reference image in a preset picture library, and determining a target reference image matched with the entity picture in the real scene;
and determining the attribute information corresponding to the target reference image as the attribute information of the entity picture.
In some embodiments of the present disclosure, the preset picture library includes at least one group of reference images, and each group of reference images includes at least one reference image representing the same entity picture.
In the embodiment of the disclosure, by presetting the picture library and the correspondence between reference images and attribute information, the attribute information can be determined directly by matching against the reference images in the picture library. This determination is simple and convenient, and the attribute information can be set by the user based on the presentation requirements of the actual AR effect, enabling personalized settings.
In some embodiments of the present disclosure, the determining attribute information of the physical picture displayed in the real scene based on the real scene image includes:
determining an image area where the entity picture in the real scene image is located by utilizing a pre-trained target detection model and the real scene image;
and identifying the image area where the entity picture is located, or identifying the image area where the entity picture is located by utilizing a pre-trained attribute detection model to obtain attribute information of the entity picture.
In the embodiment of the disclosure, the entity picture and its corresponding attribute information in the real scene image are identified by means of a neural network, which enables accurate detection of multi-dimensional attribute information.
In some embodiments of the present disclosure, the attribute information of the entity picture includes at least one of the following: picture style; picture tone; picture size; content depicted by the picture; picture identifier.
In some embodiments of the present disclosure, the obtaining of the virtual object information matching with the attribute information of the entity picture includes:
acquiring virtual object information matched with the attribute information of the entity picture from a preset virtual object library; the virtual object library includes virtual object information corresponding to the at least one attribute information.
In the embodiment of the present disclosure, by presetting the virtual object library and the correspondence between attribute information and virtual objects, the virtual object matched with the attribute information can be determined directly by matching against the virtual objects in the virtual object library. This determination is simple and convenient, and the virtual object can be selected autonomously by the user based on the presentation requirements of the actual AR effect, enabling personalized settings.
In some embodiments of the present disclosure, the associated area corresponding to the physical drawing is a first area where the physical drawing is located and/or a second area having a preset relative position relationship with the physical drawing.
In a second aspect, an embodiment of the present disclosure further provides a device for presenting an augmented reality AR effect, including:
the first acquisition module is used for acquiring a real scene image;
the determining module is used for determining attribute information of the entity picture displayed in the real scene based on the real scene image;
the second acquisition module is used for acquiring virtual object information matched with the attribute information of the entity picture;
and the presentation module is used for controlling an AR device, by using the virtual object information, to present the AR effect corresponding to the virtual object information in the first area where the entity picture is located and/or the second area associated with the entity picture.
In some embodiments of the disclosure, the determining module, when determining attribute information of the physical picture displayed in the real scene based on the real scene image, is specifically configured to:
matching the real scene image with a reference image in a preset picture library, and determining a target reference image matched with the entity picture in the real scene;
and determining preset attribute information corresponding to the target reference image as attribute information of the entity picture.
In some embodiments of the present disclosure, the preset picture library includes at least one group of reference images, and each group of reference images includes images obtained by capturing the same entity picture in different shooting modes.
In some embodiments of the disclosure, the determining module, when determining attribute information of the physical picture displayed in the real scene based on the real scene image, is specifically configured to:
determining an image area where the entity picture in the real scene image is located by utilizing a pre-trained target detection model and the real scene image;
and identifying the image area where the entity picture is located, or identifying the image area where the entity picture is located by utilizing a pre-trained attribute detection model to obtain attribute information of the entity picture.
In some embodiments of the present disclosure, the attribute information of the entity picture includes at least one of the following: picture style; picture tone; picture size; content depicted by the picture; picture identifier.
In some embodiments of the disclosure, the second obtaining module, when obtaining the virtual object information matched with the attribute information of the entity picture, is specifically configured to:
acquiring virtual object information matched with the attribute information of the entity picture from a preset virtual object library; the virtual object library includes virtual object information corresponding to the at least one attribute information.
In some embodiments of the present disclosure, the associated area corresponding to the physical drawing is a first area where the physical drawing is located and/or a second area having a preset relative position relationship with the physical drawing.
In a third aspect, the present disclosure further provides an electronic device including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps in the first aspect or in any one of the possible implementations of the first aspect.
In a fourth aspect, the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed, performs the steps in the first aspect or in any one of the possible implementations of the first aspect.
The method, the apparatus, the electronic device, and the storage medium provided by the embodiments of the disclosure can identify the attribute information of a real entity picture in a real scene based on an image of the real scene, acquire virtual object information matched with the attribute information of the entity picture, and use the virtual object information to present a corresponding AR effect in the first area or the second area associated with the entity picture. In this way, the display content of the virtual model can be superimposed on the picture content of the real entity picture, breaking through the limitations of traditional two-dimensional display and enclosed three-dimensional display, presenting a vivid AR effect on or around the entity picture in the real scene, and giving the user the visual experience of the virtual model "leaping off the paper". This not only improves the presentation of the AR effect, but also enhances the user's visual experience.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 illustrates a flowchart of a method for presenting an AR effect provided by an embodiment of the present disclosure;
FIG. 2A illustrates a presentation diagram of an AR effect provided by embodiments of the present disclosure;
FIG. 2B illustrates a rendering schematic of another AR effect provided by embodiments of the present disclosure;
FIG. 3 illustrates a schematic diagram of a presentation apparatus for AR effects provided by embodiments of the present disclosure;
fig. 4 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
The present disclosure may be applied to any electronic device supporting AR technology (such as a mobile phone, a tablet, or AR glasses), to a server, or to a combination thereof. Where the present disclosure is applied to a server, the server may be connected to other electronic devices having a communication function and a camera; the connection may be wired or wireless, and the wireless connection may be, for example, a Bluetooth connection or a Wireless Fidelity (Wi-Fi) connection.
For example, presenting the AR effect in the AR device may be understood as presenting, in the AR device, a virtual object merged into the real scene. This may be done by directly rendering the presentation content of the virtual object so that it merges with the real scene, for example presenting a set of virtual tea ware whose display effect is that of being placed on a real desktop in the real scene; or by presenting a merged display picture obtained by fusing the presentation content of the virtual object with an image of the real scene. Which presentation manner is chosen depends on the device type of the AR device and the picture presentation technology adopted. In general, since the real scene (not an imaged real-scene picture) can be seen directly through AR glasses, AR glasses may adopt the presentation manner of directly rendering the presentation picture of the virtual object; for mobile terminal devices such as mobile phones and tablet computers, since what is displayed on the device is a picture obtained by imaging the real scene, the AR effect can be displayed by fusing the real scene image with the presentation content of the virtual object.
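The device-dependent choice of presentation manner described above can be sketched as a simple dispatch. The device-type strings and mode names are illustrative paraphrases of the text, not an actual API:

```python
# Hypothetical sketch: choosing a presentation mode by device type.

def choose_presentation_mode(device_type):
    # AR glasses let the user see the real scene directly, so only the
    # virtual object needs rendering; phones and tablets display an imaged
    # real-scene picture, so virtual content is fused with that image.
    if device_type == "ar_glasses":
        return "render_virtual_object_directly"
    if device_type in ("phone", "tablet"):
        return "fuse_virtual_object_with_scene_image"
    raise ValueError(f"unknown device type: {device_type}")

print(choose_presentation_mode("ar_glasses"))  # render_virtual_object_directly
print(choose_presentation_mode("tablet"))      # fuse_virtual_object_with_scene_image
```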
A method for presenting an AR effect according to an embodiment of the present disclosure is described in detail below.
Referring to fig. 1, a schematic flow chart of a method for presenting an AR effect according to an embodiment of the present disclosure includes the following steps:
and S101, acquiring a real scene image.
In the embodiment of the disclosure, the AR effect presentation method can be applied to an AR device or a server. When the method is applied to the AR device, an image acquisition device (such as a camera) in the AR device may be used to acquire a real scene image of the real scene; a single-frame real scene image may be acquired by taking a photograph, or consecutive frames of real scene images may be acquired by shooting a video. When the method is applied to a server, the AR device or another electronic device with an image acquisition function can send the acquired single-frame or consecutive multi-frame real scene images to the server. The present disclosure does not limit the specific manner of image acquisition or the number of frames acquired.
For example, a user may be located in an exhibition hall and, while there, collect images of various exhibition areas or exhibits in real time to view the AR effects of those exhibition areas or exhibits after virtual objects are superimposed.
S102, determining attribute information of the entity picture displayed in the real scene based on the real scene image.
The real scene image in the embodiment of the present disclosure is an image of a real scene captured by an AR device or other electronic devices. The image of the real scene may include at least one physical object of the real scene. For example, for the real scene image in the exhibition hall, the solid object included in the real scene image may be at least one exhibit in the exhibition hall, such as a solid drawing in the exhibition hall, or a solid sculpture, etc.
The entity picture in the embodiment of the present disclosure refers to an entity picture actually existing in the real scene represented by the real scene image, for example a picture displayed in an exhibition hall, such as a landscape painting or a portrait. The carrier medium of the entity picture may be paper, an electronic screen, or another form; the present disclosure does not limit the carrier medium. The picture content depicted in the entity picture may be blank or actual picture content, and the embodiment of the present disclosure does not limit the specific form or style of the picture content or the depicted object.
In the embodiment of the disclosure, the entity picture can have certain attribute information. Illustratively, the attribute information may include, but is not limited to, at least one of the following: picture style; picture tone; picture size; content depicted by the picture; picture identifier.
The picture style can be divided in various ways. In one example, paintings can be classified according to the tools, materials, and painting techniques used, into styles such as oil painting, watercolor painting, and woodblock print. In another example, they may be divided by content into portrait, landscape, and the like.
The picture tone can also be determined in several ways. In one example, based on the color composition of the image, a color with a higher proportion can be used as the tone of the picture; for example, if green occupies a higher proportion of the entity picture, green can be used as the picture tone. In another example, the entity picture may be distinguished as warm-toned or cool-toned; for example, if the entity picture is mostly composed of red or yellow, its tone may be marked as warm.
The picture size may be the actual size of the entity picture, determined according to the border size of the entity picture.
The content depicted by pictures is varied; the content can be classified, and the category to which the content belongs can be used as attribute information of the entity picture. For example, if the picture depicts a person, the attribute information of the entity picture can be determined based on the category of the person, such as elderly, young adult, or child, and modern, historical, or cartoon figure. If the picture depicts an animal, the attribute information of the entity picture can be determined according to the category to which the animal belongs. Because the content categories are numerous, the embodiments of the disclosure do not list them one by one.
The picture identifier may be represented by a particular number or symbol. For example, the entity picture may be marked with a corresponding entity identifier, which is recognized as the picture identifier of the entity picture. For instance, several entity pictures displayed on a wall of an exhibition hall may be numbered, and each number used as the picture identifier of the corresponding entity picture.
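The attribute dimensions listed above could be gathered into one record. A minimal sketch follows; the field names and example values are illustrative choices, not taken from the patent:

```python
# Hypothetical in-memory shape for the five attribute dimensions.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PictureAttributes:
    style: Optional[str] = None                  # e.g. "oil painting", "watercolor"
    tone: Optional[str] = None                   # e.g. "warm", "cool", or a dominant color
    size: Optional[Tuple[float, float]] = None   # (width, height) of the physical frame
    content: Optional[str] = None                # e.g. "person", "animal", "landscape"
    identifier: Optional[str] = None             # e.g. an exhibit number on the wall

attrs = PictureAttributes(style="oil painting", tone="warm", identifier="No.7")
print(attrs.style, attrs.identifier)
```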
In some embodiments of the present disclosure, any of the following modes may be adopted to determine the attribute information of the entity picture displayed in the real scene.
Mode one: matching the real scene image with reference images in a preset picture library, and determining a target reference image matched with the entity picture in the real scene; and determining the preset attribute information corresponding to the target reference image as the attribute information of the entity picture.
When the presentation method is applied to the AR device, the AR device can complete the matching process either locally or in the cloud: the AR device can upload the real scene image to a cloud server, have the server complete the matching, and receive the target reference image matched with the entity picture returned by the server.
In the case that the presentation method is applied to a server, the server may match the reference image in the preset picture library with the image of the real scene to determine a matched target reference image.
The preset picture library can comprise at least one group of reference images, and each group of reference images comprises at least one reference image representing the same entity picture. Reference images contained in different groups of reference images represent different entity pictures, and reference images contained in the same group of reference images represent the same entity picture.
To achieve accurate matching, when the picture library is constructed, reference images acquired by capturing the same entity picture in different shooting modes can be added to the group of reference images representing that entity picture. For example, different shooting modes may be shooting under different lighting intensities or at different shooting angles.
For example, if 10 entity pictures need to be displayed in an exhibition area of an exhibition hall, 10 groups of reference images can be added to the preset picture library: each entity picture is photographed in different shooting modes, and the resulting reference images representing that entity picture are used as the group of reference images corresponding to it.
In some embodiments of the disclosure, a corresponding relationship may be pre-established between each group of reference images in a preset image library and attribute information of an entity image represented by the group of reference images, where the entity image represented by each group of reference images may have at least one attribute information, and accordingly, the attribute information of the entity image corresponding to each group of reference images may be one or more.
For example, in the specific matching process, the reference image most similar to the entity picture photographed in the real scene may be found in the preset picture library through similarity comparison or the like; the most similar reference image is taken as the target reference image matched with the entity picture in the real scene, and the attribute information corresponding to the group containing the target reference image is then determined as the attribute information of the entity picture. For instance, if the attribute information bound to the group containing the matched target reference image is "figure painting", it can be determined that the attribute information of the entity picture in the real scene is "figure painting".
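The grouped-library matching just described can be sketched with a toy similarity measure. Here cosine similarity over simple color-histogram vectors stands in for whatever comparison the implementation actually uses; the features, groups, and attribute labels are all assumptions for the sketch:

```python
# Illustrative matching against a grouped reference library by similarity.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Each group holds several reference feature vectors of the SAME entity
# picture (different angles/lighting), bound to one attribute record.
reference_library = [
    {"features": [[9, 1, 0], [8, 2, 1]], "attributes": {"content": "person"}},
    {"features": [[1, 9, 2], [0, 8, 3]], "attributes": {"content": "landscape"}},
]

def match_attributes(query_features):
    best_score, best_attrs = 0.0, None
    for group in reference_library:
        for ref in group["features"]:
            score = cosine_similarity(query_features, ref)
            if score > best_score:
                best_score, best_attrs = score, group["attributes"]
    return best_attrs, best_score

attrs, score = match_attributes([9, 2, 0])
print(attrs)  # {'content': 'person'}
```

A real system would use robust image features (e.g. learned embeddings or local descriptors) rather than raw histograms, but the group-then-lookup structure is the same.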
Mode two: determining the image area where the entity picture is located in the real scene image by using a pre-trained target detection model and the real scene image, and identifying the image area where the entity picture is located.
The target detection model can be a neural network model, the image sample marked with the image area where the entity picture is located can be used for training the target detection model, and the trained target detection model can accurately identify the image area where the entity picture is located in a real scene.
For example, the identification of the image area where the physical drawing is located may be to identify the size of the image area, and the size of the physical drawing may be obtained based on the identification result. Alternatively, the identification of the entity drawing can also be identified, for example, the number of the entity drawing is identified, and the number of the entity drawing can be obtained as the identification of the entity drawing based on the identification result.
Mode three: determining the image area where the entity picture is located in the real scene image by using a pre-trained target detection model and the real scene image; and identifying the image area where the entity picture is located by using a pre-trained attribute detection model, to obtain the attribute information of the entity picture.
In mode three, after the image area where the entity picture is located is obtained, a pre-trained attribute detection model performs prediction of attribute information in more dimensions on that image area; for example, the picture style, picture tone, and attribute information of the content depicted by the picture can be predicted. Of course, the picture tone, picture style, attribute information of the depicted content, and the like can also be recognized directly in mode two.
If mode three is adopted, image samples labeled in advance with attribute information of entity pictures can be used to train the attribute detection model, and the trained attribute detection model can then accurately predict the attribute information of an entity picture.
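The two-stage recognition described above (a detector proposes the image region of the entity picture, then an attribute model classifies the cropped region) can be sketched with stub functions standing in for the trained networks; everything below is a hypothetical illustration:

```python
# Sketch of two-stage recognition: detect the picture region, then classify it.

def detect_picture_region(scene_image):
    # Stub target detection model: returns a bounding box (x, y, w, h).
    return (10, 20, 100, 80)

def crop(scene_image, box):
    # Stub crop: in a real system this would slice pixel data.
    x, y, w, h = box
    return {"region": (x, y, w, h), "source": scene_image}

def predict_attributes(region_crop):
    # Stub attribute detection model: multi-dimensional attribute predictions.
    return {"style": "watercolor", "tone": "cool", "content": "landscape"}

scene = {"id": "exhibition_hall_frame_001"}
box = detect_picture_region(scene)
attrs = predict_attributes(crop(scene, box))
print(box, attrs["style"])
```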
S103, acquiring virtual object information matched with the attribute information of the entity picture.
In the embodiment of the present disclosure, the virtual object may be a two-dimensional virtual object or a three-dimensional virtual object. The specific thing characterized by the virtual object may be determined based on the actual scene. For example, the virtual object may be any one or any combination of a virtual character, a virtual animal, a virtual article, a virtual building, a virtual plant, a virtual sticker, a virtual pictorial content, which the present disclosure is not limited to.
The virtual object information may be rendering parameters required for rendering the virtual object into a special effect, and may further include, for example, two-dimensional and/or three-dimensional model parameters of the virtual object. For example, where the virtual object is a virtual character or a virtual animal, the model parameters in the virtual object information may include facial key points, limb key points, and the like of the virtual character or virtual animal.
In the embodiment of the disclosure, the virtual object information matched with the attribute information of the entity picture can be acquired from a preset virtual object library; the virtual object library includes virtual object information corresponding to the at least one attribute information.
For example, a correspondence between attribute information of entity pictures and virtual object information may be established in advance. Specifically, each piece of attribute information may correspond to at least one piece of virtual object information, or several pieces of attribute information may jointly correspond to one piece of virtual object information. For example, when the attribute information indicates a character picture, the corresponding virtual object information may include at least one of facial makeup stickers and decoration information.
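The correspondence described above can be sketched as a lookup table keyed by attribute combinations. The library contents, keys, ids, and types below are illustrative placeholders, not values from the patent:

```python
# Hypothetical virtual object library: a key of one or more attribute values
# maps to one or more pieces of virtual object information.
VIRTUAL_OBJECT_LIBRARY = {
    ("character picture",): [{"id": "facial_makeup_sticker", "type": "2d_sticker"},
                             {"id": "decoration", "type": "2d_sticker"}],
    ("landscape", "warm tone"): [{"id": "virtual_plant", "type": "3d_model"}],
}

def match_virtual_objects(attributes):
    """Return every library entry whose key attributes all appear in `attributes`."""
    attrs = set(attributes)
    matched = []
    for key, objects in VIRTUAL_OBJECT_LIBRARY.items():
        if set(key) <= attrs:
            matched.extend(objects)
    return matched
```

A single-attribute key realizes "each attribute corresponds to at least one piece of virtual object information", while a multi-attribute key realizes "several attributes jointly correspond to one piece of virtual object information".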
When the presentation method is applied to the AR device, the virtual object information matched with the attribute information may be acquired locally or from the cloud, depending on where the virtual object information is stored. When the presentation method is applied to a server, the server can look up the matching virtual object information directly among the virtual object information it stores.
S104, controlling, by using the virtual object information, the AR device to present the AR effect corresponding to the virtual object information in the associated area corresponding to the entity picture.
When the presentation method is applied to the AR device, the AR device may render the virtual object information directly with a rendering tool and then present the corresponding AR effect in the first area where the entity picture is located, or in a second area associated with that first area. When the presentation method is applied to a server, the server may either send the virtual object information to the AR device, which completes the rendering and presentation, or render the virtual object information itself with a rendering tool and send the generated rendering picture to the AR device, which then presents the AR effect using that picture.
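The deployment variants just described can be sketched as follows; the function names and the callback-style wiring are illustrative only, not an API from the patent:

```python
def present_on_ar_device(info, render, display):
    # Variant 1: the AR device renders the virtual object information
    # itself and displays the resulting AR effect.
    display(render(info))

def present_via_server(info, render, send_info, send_frame, device_renders):
    # Variant 2: the server either forwards the raw virtual object
    # information for the device to render (2a), or renders it itself and
    # ships the finished rendering picture to the device (2b).
    if device_renders:
        send_info(info)
    else:
        send_frame(render(info))
```

The choice between 2a and 2b trades client compute against bandwidth: shipping model parameters is cheap but requires a capable device, while shipping rendered frames works on thin clients.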
In the embodiment of the disclosure, the associated area corresponding to the entity picture may be a first area where the entity picture is located and/or a second area having a preset relative position relationship with the entity picture.
The first area may be the area occupied by the entity picture itself; by superimposing the rendering picture of the virtual object in the first area, an AR effect can be presented inside the entity picture.
The second area may be an area outside the one occupied by the entity picture, with a preset relative position relationship to the area where the entity picture is located, such as above, below, or directly in front of it. By superimposing the rendering picture of the virtual object in the second area, the special effect of the virtual object can be added on top of the picture content presented by the original entity picture.
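Superimposing a rendering picture into the first or a second area can be sketched as a simple compositing step; the "area above the picture" rule here is one hypothetical example of a preset relative position relationship:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: int
    y: int
    w: int
    h: int

def overlay(scene, patch, box):
    # Superimpose a rendered virtual-object patch onto the scene at `box`
    # (the first area itself, or a second area derived from it).
    out = [row[:] for row in scene]
    for dy, row in enumerate(patch):
        for dx, px in enumerate(row):
            out[box.y + dy][box.x + dx] = px
    return out

def area_above(first_area, height):
    # One possible second area: directly above the picture's first area.
    return Box(first_area.x, first_area.y - height, first_area.w, height)
```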
Through the AR effect presented by the embodiment of the present disclosure, a visual effect is achieved in which the virtual object seems to "leap off the paper", which effectively enhances the visual effect of the original entity picture and improves the user's viewing experience. For example, referring to the presentation schematic diagram of the AR effect shown in fig. 2A, the entity picture may be a piece of blank white paper, and the virtual objects black "children" and "stones". With the presentation method provided by the embodiment of the present disclosure, after the white paper is recognized in the real scene image, its attribute information can be further determined, the corresponding virtual object information can be found based on that attribute information, and a visual effect of "children standing on stones" can be presented on the white paper through rendering and related operations, realizing an AR effect that appears to leap off the page.
The presentation method of the AR effect provided by the embodiment of the disclosure can superimpose the display content of a virtual model onto the picture content of a real entity picture, breaking through the limitations of traditional two-dimensional display and closed three-dimensional display, presenting a realistic AR effect on or around the entity picture in the real scene, and giving the user the visual experience of a virtual model leaping off the page. This not only improves the presentation of the AR effect but also enhances the user's visual experience.
The following is an illustration of a specific application scenario of the disclosed embodiments.
First, a paper picture library (corresponding to the preset picture library) is established.
The entity pictures displayed in an exhibition hall are scanned; each scanned image is stored, in the cloud or locally, as a reference image, and the attribute information of the displayed picture is entered as a label (tag) for each scanned reference image or each group of reference images.
Then, in the application phase, the entity picture to be overlaid may be scanned with a portable mobile device that has a camera, such as a mobile phone; the video frame data or picture data captured by the camera is sent to the server in the cloud. After receiving the data, the server matches the data against the previously established picture library and, once the matching succeeds, returns the tag of the target reference image to the client.
Further, after receiving the tag, the client may download from the cloud, or read locally, the model parameters of the virtual object corresponding to the tag, and present the AR effect with the virtual object superimposed in the area where the entity picture is located or in its peripheral area.
Suppose a picture is meant to display a dinosaur AR effect after being scanned with a mobile phone; the picture and the corresponding dinosaur information are first entered into the picture library in the cloud. To improve recognition efficiency, images of the picture under different illumination and different shooting angles can be collected as reference images during entry. In addition, model parameters of the dinosaur in various poses can be collected, with a unique identifier (id) assigned to the model parameters for each pose; the identifier (id) and the tag (tag) may be associated at entry time.
When a user wants to scan the picture to see the dinosaur effect, the picture can be scanned in the manner described above. After the server successfully matches it with a picture in the preset picture library, the tag corresponding to that picture is identified, the corresponding id is found from the tag, and the model parameters of the dinosaur corresponding to that id can be retrieved from the cloud or locally. The corresponding dinosaur effect can then be presented on the picture; for example, a "dinosaur" special effect (reference numeral 20 in fig. 2B) is presented on the picture of plants (reference numeral 10 in fig. 2B) shown in fig. 2B, so that the dinosaur appears to leap off the paper.
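The lookup chain of the dinosaur example — matched tag, to unique identifier (id), to model parameters — can be sketched as follows. The table contents and parameter names are illustrative only; the patent does not specify a storage layout:

```python
# Hypothetical tag -> id and id -> model-parameter tables, associated at
# entry time when the picture library is built.
TAG_TO_ID = {"dinosaur_picture": "dino_standing_01"}
MODEL_PARAMS = {"dino_standing_01": {"mesh": "dinosaur.obj", "pose": "standing"}}

def resolve_model_params(tag):
    model_id = TAG_TO_ID[tag]      # id associated with the tag at entry time
    return MODEL_PARAMS[model_id]  # parameters fetched from cloud or local cache

def on_match(tag, present):
    # Called on the client once the server reports a successful library match.
    present(resolve_model_params(tag))
```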
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same technical concept, an AR effect presentation apparatus corresponding to the AR effect presentation method is also provided in the embodiments of the present disclosure. Since the principle by which the apparatus solves the problem is similar to that of the presentation method described above, the implementation of the apparatus may refer to the implementation of the method, and repeated details are omitted.
Referring to fig. 3, a schematic diagram of an apparatus for presenting an AR effect according to an embodiment of the present disclosure is shown, where the apparatus includes: a first obtaining module 31, a determining module 32, a second obtaining module 33 and a presenting module 34.
A first obtaining module 31, configured to obtain a real scene image;
a determining module 32, configured to determine attribute information of the entity picture displayed in the real scene based on the real scene image;
a second obtaining module 33, configured to obtain virtual object information matched with the attribute information of the entity picture;
and a presenting module 34, configured to control, by using the virtual object information, the AR device to present an AR effect corresponding to the virtual object information in an associated area corresponding to the entity picture.
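The four modules of fig. 3 can be sketched as a class whose `run` method chains steps S101 through S104. The module names follow the patent; the callables wired in are placeholders:

```python
# Minimal sketch of the apparatus of fig. 3; each module is injected as a
# callable so the class only expresses the data flow between them.
class ARPresentationApparatus:
    def __init__(self, first_obtaining, determining, second_obtaining, presenting):
        self.first_obtaining_module = first_obtaining    # module 31
        self.determining_module = determining            # module 32
        self.second_obtaining_module = second_obtaining  # module 33
        self.presenting_module = presenting              # module 34

    def run(self):
        image = self.first_obtaining_module()                  # S101
        attributes = self.determining_module(image)            # S102
        object_info = self.second_obtaining_module(attributes) # S103
        return self.presenting_module(object_info)             # S104
```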
In some embodiments of the present disclosure, the determining module 32, when determining the attribute information of the entity picture displayed in the real scene based on the real scene image, is specifically configured to:
matching the real scene image with a reference image in a preset picture library, and determining a target reference image matched with the entity picture in the real scene;
and determining preset attribute information corresponding to the target reference image as attribute information of the entity picture.
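The library-matching branch can be sketched with a toy similarity function over feature vectors; a real system would use image feature descriptors, which the patent does not specify, and the vectors and tags below are hypothetical:

```python
# Toy reference-image matching: each reference image is summarized by a
# feature vector and carries the preset attribute information of its picture.
PICTURE_LIBRARY = [
    {"features": (0.9, 0.1), "attributes": {"tag": "plants"}},
    {"features": (0.1, 0.8), "attributes": {"tag": "portrait"}},
]

def similarity(a, b):
    # Negative squared Euclidean distance: larger means more similar.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def match_reference(scene_features):
    # Pick the target reference image, then hand back its preset attributes.
    best = max(PICTURE_LIBRARY,
               key=lambda ref: similarity(scene_features, ref["features"]))
    return best["attributes"]
```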
In some embodiments of the present disclosure, the preset picture library includes at least one set of reference images, and each set of reference images includes images obtained by capturing the same entity picture in different shooting modes.
In some embodiments of the present disclosure, the determining module 32, when determining the attribute information of the entity picture displayed in the real scene based on the real scene image, is specifically configured to:
determining an image area where the entity picture in the real scene image is located by utilizing a pre-trained target detection model and the real scene image;
and identifying the image area where the entity picture is located, or identifying the image area where the entity picture is located by utilizing a pre-trained attribute detection model to obtain attribute information of the entity picture.
In some embodiments of the present disclosure, the attribute information of the entity picture includes at least one of the following information: picture style; picture tone; picture size; content depicted by the picture; and picture identifier.
In some embodiments of the disclosure, the second obtaining module 33, when obtaining the virtual object information matching with the attribute information of the entity picture, is specifically configured to:
acquiring the virtual object information matched with the attribute information of the entity picture from a preset virtual object library, where the virtual object library includes virtual object information corresponding to at least one piece of attribute information.
In some embodiments, the associated area corresponding to the entity picture is a first area where the entity picture is located and/or a second area having a preset relative position relationship with the entity picture.
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or of the modules it includes, may be used to execute the method described in the method embodiments above; for specific implementation, refer to the description of those method embodiments, which is not repeated here for brevity.
Based on the same technical concept, the embodiment of the disclosure also provides an electronic device. Referring to fig. 4, a schematic structural diagram of an electronic device provided in the embodiment of the present disclosure includes:
a processor 11 and a memory 12, the memory 12 storing machine-readable instructions executable by the processor 11; when the electronic device runs, the instructions are executed by the processor 11 to perform the following steps:
acquiring a real scene image;
determining attribute information of an entity picture displayed in a real scene based on the real scene image;
acquiring virtual object information matched with the attribute information of the entity picture;
and controlling, by using the virtual object information, the AR device to present an AR effect corresponding to the virtual object information in a first area where the entity picture is located and/or a second area associated with the entity picture.
The specific processing procedure executed by the processor 11 may refer to the description in the above method embodiment or apparatus embodiment, and is not further described here.
Furthermore, the embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the AR effect presentation method described in the method embodiments above.
The computer program product of the AR effect presentation method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the steps of the AR effect presentation method described in the method embodiments above, to which reference may be made for details that are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method for presenting an Augmented Reality (AR) effect, comprising:
acquiring a real scene image;
determining attribute information of an entity picture displayed in a real scene based on the real scene image;
acquiring virtual object information matched with the attribute information of the entity picture;
and utilizing the virtual object information to control AR equipment to present the AR effect corresponding to the virtual object information in the associated area corresponding to the entity picture.
2. The method according to claim 1, wherein the determining attribute information of the entity picture displayed in the real scene based on the real scene image comprises:
matching the real scene image with a reference image in a preset picture library, and determining a target reference image matched with the entity picture in the real scene;
and determining the attribute information corresponding to the target reference image as the attribute information of the entity picture.
3. The method according to claim 2, wherein the preset picture library comprises at least one set of reference images, each set of reference images comprising at least one reference image presenting a same entity picture.
4. The method according to claim 1, wherein the determining attribute information of the entity picture displayed in the real scene based on the real scene image comprises:
determining an image area where the entity picture in the real scene image is located by utilizing a pre-trained target detection model and the real scene image;
and identifying the image area where the entity picture is located, or identifying the image area where the entity picture is located by utilizing a pre-trained attribute detection model to obtain attribute information of the entity picture.
5. The method according to any one of claims 1 to 4, wherein the attribute information of the entity picture comprises at least one of the following information: picture style; picture tone; picture size; content depicted by the picture; and picture identifier.
6. The method according to any one of claims 1 to 5, wherein the obtaining virtual object information matched with the attribute information of the entity picture comprises:
acquiring the virtual object information matched with the attribute information of the entity picture from a preset virtual object library, wherein the virtual object library comprises virtual object information corresponding to at least one piece of attribute information.
7. The method according to any one of claims 1 to 6, wherein the associated area corresponding to the entity picture is a first area where the entity picture is located and/or a second area having a preset relative position relationship with the entity picture.
8. A presentation apparatus for augmented reality AR effects, comprising:
the first acquisition module is used for acquiring a real scene image;
the determining module is used for determining attribute information of the entity picture displayed in the real scene based on the real scene image;
the second acquisition module is used for acquiring virtual object information matched with the attribute information of the entity picture;
and the presentation module is used for controlling, by using the virtual object information, the AR device to present the AR effect corresponding to the virtual object information in a first area where the entity picture is located and/or a second area associated with the entity picture.
9. An electronic device, comprising: a processor, a memory storing machine readable instructions executable by the processor, the processor for executing the machine readable instructions stored in the memory, the processor performing the steps of the method of presenting an AR effect according to any one of claims 1 to 7 when the machine readable instructions are executed by the processor.
10. A computer-readable storage medium, having stored thereon a computer program, which, when executed by an electronic device, performs the steps of the method of presenting AR effects of any one of claims 1 to 7.
CN202010488809.9A 2020-06-02 2020-06-02 AR effect presentation method and apparatus, electronic device and storage medium Pending CN111640190A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010488809.9A CN111640190A (en) 2020-06-02 2020-06-02 AR effect presentation method and apparatus, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN111640190A true CN111640190A (en) 2020-09-08

Family

ID=72331160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010488809.9A Pending CN111640190A (en) 2020-06-02 2020-06-02 AR effect presentation method and apparatus, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN111640190A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486205A (en) * 2021-07-06 2021-10-08 北京林业大学 Plant science popularization information system based on augmented virtual reality technology
WO2024001560A1 (en) * 2022-06-27 2024-01-04 中兴通讯股份有限公司 Augmented reality-based drawing teaching method and system, display terminal, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106649539A (en) * 2016-11-02 2017-05-10 深圳市幻实科技有限公司 Method and device for playing augmented reality videos
CN108550190A (en) * 2018-04-19 2018-09-18 腾讯科技(深圳)有限公司 Augmented reality data processing method, device, computer equipment and storage medium
CN109078327A (en) * 2018-08-28 2018-12-25 百度在线网络技术(北京)有限公司 Game implementation method and equipment based on AR
CN109741462A (en) * 2018-12-29 2019-05-10 广州欧科信息技术股份有限公司 Showpiece based on AR leads reward device, method and storage medium
CN110286773A (en) * 2019-07-01 2019-09-27 腾讯科技(深圳)有限公司 Information providing method, device, equipment and storage medium based on augmented reality
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium



Similar Documents

Publication Publication Date Title
KR102417645B1 (en) AR scene image processing method, device, electronic device and storage medium
KR102118000B1 (en) Target target display method and device
CN111638796A (en) Virtual object display method and device, computer equipment and storage medium
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
CN103975365B (en) Methods and systems for capturing and moving 3d models and true-scale metadata of real world objects
CN106355153A (en) Virtual object display method, device and system based on augmented reality
CN107484428B (en) Method for displaying objects
US10186084B2 (en) Image processing to enhance variety of displayable augmented reality objects
WO2016122973A1 (en) Real time texture mapping
JP2019510297A (en) Virtual try-on to the user's true human body model
CN106203286B (en) Augmented reality content acquisition method and device and mobile terminal
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
JP7209474B2 (en) Information processing program, information processing method and information processing system
CN111696215A (en) Image processing method, device and equipment
CN112774183B (en) Appreciation system, model forming apparatus, control method, and recording medium
CN103761758A (en) Travel virtual character photographing method and system
CN111651049B (en) Interaction method, device, computer equipment and storage medium
CN111640190A (en) AR effect presentation method and apparatus, electronic device and storage medium
CN108961375A (en) A kind of method and device generating 3-D image according to two dimensional image
CN111862340A (en) Augmented reality data presentation method and device, display equipment and storage medium
CN109074680A (en) Realtime graphic and signal processing method and system in augmented reality based on communication
CN111639613A (en) Augmented reality AR special effect generation method and device and electronic equipment
JP2013037533A (en) Product information acquisition system and product information provision server device
CN110267079B (en) Method and device for replacing human face in video to be played
RU2735066C1 (en) Method for displaying augmented reality wide-format object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200908