WO2021250540A1 - Method for displaying augmented reality multimedia contents - Google Patents

Method for displaying augmented reality multimedia contents Download PDF

Info

Publication number
WO2021250540A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual image
reference object
data
identification code
user
Prior art date
Application number
PCT/IB2021/054969
Other languages
French (fr)
Inventor
Andrea Bortolotti
Original Assignee
The B3Ring Company S.R.L.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The B3Ring Company S.R.L. filed Critical The B3Ring Company S.R.L.
Publication of WO2021250540A1 publication Critical patent/WO2021250540A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/20: Scenes; Scene-specific elements in augmented reality scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10: Character recognition
    • G06V30/22: Character recognition characterised by the type of writing
    • G06V30/224: Character recognition characterised by the type of writing of printed characters having additional code marks or containing code marks

Definitions

  • This invention relates to a method for displaying multimedia contents with augmented reality technology.
  • augmented reality technology allows three-dimensional multimedia content to be shown on a photo of a real subject.
  • the method involves capturing image data, representing a real image of an object, using a camera of an electronic device.
  • the image data are sent to a processor.
  • the processor receives virtual image data, representing three-dimensional objects to be displayed on the real image.
  • the processor processes the image data and the virtual image data to generate derived image data (preferably three-dimensional), which are then displayed on the electronic device in the form of an augmented reality image.
  • This invention has for an aim to provide a method and an electronic device for displaying multimedia contents with augmented reality technology to overcome the above-mentioned disadvantages of the prior art.
  • this disclosure provides a method for displaying multimedia contents with augmented reality technology, through an electronic device including a processor and having access to a memory.
  • the method comprises a step of receiving real image data, representing an image of a reference object, captured by a camera (or optical device) of the electronic device.
  • the method comprises a step of retrieving one or more virtual image datasets from the memory.
  • the virtual image data represent corresponding three-dimensional (or even two-dimensional) images of objects.
  • the method comprises a step of generating derived image data based on the real image data and the virtual image datasets retrieved.
  • the method comprises a step of sending the derived image data to a display of the electronic device to display an augmented reality image including the reference object.
  • the method comprises a step of reading an identification code, uniquely associated with the reference object captured by the camera.
  • the reference object is provided with an identification element (or tag) containing information representing the identification code.
  • the identification element may be a barcode (readable by a scanner or a camera) or a transmitter (active element) or a memory containing reference data (passive element).
  • the electronic device can read the identification code using different technologies, depending also on the type of identification element used; for example, reading may occur by capturing an image or by exchanging data via NFC or RFID technology.
  • the processor retrieves the one or more virtual image datasets from the memory based on the identification code it has received.
  • This feature allows discriminating the virtual image data based on the object captured, thus varying the augmented reality image based on the object captured.
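The retrieval step just described amounts to a lookup keyed by the identification code. The sketch below illustrates this under illustrative assumptions: the store contents, the codes, and the function name are not from the patent.

```python
# In-memory stand-in for the memory that stores virtual image datasets,
# keyed by the identification code of the captured reference object.
# All codes and dataset contents are illustrative.
VIRTUAL_IMAGE_STORE = {
    "CD1": [{"name": "star", "dims": 3}, {"name": "heart", "dims": 3}],
    "CD2": [{"name": "flower", "dims": 3}],
}

def retrieve_virtual_images(identification_code):
    """Return the virtual image datasets associated with the captured object,
    or an empty list for an unknown code (nothing extra is displayed)."""
    return VIRTUAL_IMAGE_STORE.get(identification_code, [])
```

Because the key is the object's code rather than the user, the augmented reality image varies with the object captured, as the bullet above states.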
  • the method comprises a step of registering, in which the processor receives registration data and a registration request from a user and responds by saving the registration data to the memory.
  • the registration data are uniquely associated with a user.
  • the registration data are uniquely associated with the identification code of a reference object associated with the user.
  • the method comprises a step of logging in, in which a user who wishes to use the electronic device is logged in or denied access.
  • the processor receives access data from a user.
  • the access data identify the user and may include a username, an email address, a password or other personal information connected with the user.
  • the processor checks that the access data match the registration data saved in the memory. Matching between the access data and the registration data is proof that the user is effectively registered.
  • the processor retrieves from the memory the identification code of the reference object the user is associated with.
  • the processor is thus in possession of the identification code of the reference object captured and of an identification code uniquely associated with the logged-in user.
  • the method comprises a step of comparing identification codes, where the processor compares the identification code received from the reference object with the identification code retrieved from the memory in response to receiving the access data, to verify that the reference object is associated with the user logged into the electronic device.
  • the processor retrieves the virtual image data based on the comparison between the identification codes. This allows displaying some three-dimensional objects in the form of augmented reality only if the identification code received from the object is the same code associated with the logged user, that is to say, only if the image captured by the user is the image of an object which belongs to that user. Otherwise, if the object captured does not belong to that user (identification code received from the object different from that associated with the logged user), the processor filters the objects to be displayed in the form of augmented reality.
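The ownership comparison and the filtering behaviour described above can be sketched as follows. The function name, the `visible_to_others` flag, and the sample data are assumptions for illustration; the patent does not specify how filtered datasets are marked.

```python
def datasets_to_display(code_from_object, code_of_logged_user, datasets):
    """If the captured object belongs to the logged-in user, show every
    dataset; otherwise keep only datasets flagged as visible to non-owners."""
    if code_from_object == code_of_logged_user:
        return datasets
    return [d for d in datasets if d.get("visible_to_others", False)]

# Illustrative sample: one private dataset, one shared dataset.
SAMPLE = [
    {"name": "star", "visible_to_others": False},
    {"name": "heart", "visible_to_others": True},
]
```

Capturing one's own wristband yields every dataset; capturing someone else's yields only the shared ones.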
  • the method comprises a step of receiving visibility data associated with a corresponding virtual image dataset.
  • the visibility data represent registration data corresponding to users who are authorized to display the corresponding three-dimensional image of the object.
  • when the processor receives the identification code of the object, it retrieves the registration data associated with the identification code and checks, for each virtual image dataset, whether the visibility data include those registration data. If they do, the processor retrieves the virtual image dataset; if they do not, the processor prevents it from being displayed. In an embodiment, for each virtual image dataset, the processor checks whether the access data of the logged-in user match the visibility data and, based on this check, enables or inhibits the visibility of the corresponding virtual image dataset.
  • the visibility data are associated with groups of virtual image datasets, including at least two virtual image datasets and representing a plurality of three-dimensional images. This feature allows creating groups of three-dimensional images which have the same displaying permissions, that is to say, which are displayable by a group of people, for example, a specific group of friends.
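A minimal sketch of this visibility check, with permissions expressed as sets of registered users per dataset. User names, dataset identifiers, and function names are assumptions; sharing a set between two dataset identifiers plays the role of the dataset groups mentioned above.

```python
# Visibility data: for each virtual image dataset, the set of registered
# users authorized to display it (illustrative contents).
VISIBILITY = {
    "star": {"alice", "bob"},
    "heart": {"alice"},
}

def visible_to(user, dataset_ids, visibility=VISIBILITY):
    """Keep only the datasets whose visibility data include this user."""
    return [d for d in dataset_ids if user in visibility.get(d, set())]
```

A group of friends would simply appear as the same user set attached to every dataset in the group.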
  • the visibility data (which include, inter alia, user credentials) are associated with one or more identification codes, corresponding to one or more reference objects, which include the reference object owned by the authenticated user and also include other reference objects of other users.
  • the visibility data include, for each identification code with which they are associated, a respective group of multimedia data, representative of the augmented reality objects visible on the corresponding reference object. Therefore, in one embodiment, a user, via a user interface on the mobile device, sets, for each user (i.e. for each set of registration data identifying a user), which multimedia contents are visible to that user on his or her own device.
  • the visibility data include the following fields:
  • - device data, i.e. a specific identification code representative of a specific reference object;
  • - registration data identifying a specific user;
  • - multimedia data visible to that specific user.
  • These three fields identify what content a specific user can view on a specific reference object.
  • the field relating to the multimedia data visible to the specific user can only be set by the owner of the reference object identified by the device data.
  • This embodiment advantageously allows a user to show different contents to different contacts, depending on the person and on the relationship with that person.
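The three-field visibility record just described might be modelled as below. The field names (`device_code`, `viewer`, `media`) are assumptions introduced for illustration, not terms from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VisibilityRecord:
    device_code: str   # identification code of a specific reference object
    viewer: str        # registration data identifying a specific user
    media: tuple       # multimedia contents that user may view on that object

def contents_for(records, viewer, device_code):
    """What a specific user can view on a specific reference object."""
    for r in records:
        if r.viewer == viewer and r.device_code == device_code:
            return r.media
    return ()

# Illustrative records: the owner of CD1 shows different contents
# to different contacts.
SAMPLE_RECORDS = [
    VisibilityRecord("CD1", "alice", ("star", "heart")),
    VisibilityRecord("CD1", "bob", ("star",)),
]
```

Only the owner of the object identified by `device_code` would be allowed to create or edit such records, per the bullet above.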
  • each multimedia content is associated with positioning data, representative of a position (or an order) in which the multimedia content can be viewed on the reference object (or on the bracelet). This is important because the order of the objects on the reference object greatly influences the message the user wants to convey through the contents. This aspect is even more important when sending a true holographic message.
  • the method provides for a step of sending positioning data, associated with a respective multimedia content.
  • This sending step is preferably carried out by positioning (selecting, dragging and dropping) the multimedia content at a specific position on the bracelet, in a user interface provided on the screen of an electronic device (e.g. a smartphone).
  • the method provides a step of sending multimedia data.
  • a first user sends multimedia data to a second user, who can view said multimedia data on his own bracelet.
  • the memory (remote or local) comprises a plurality of three-dimensional models that define a new type of alphabet.
  • This alphabet is three-dimensional and configured to define a modern and fast language, suitable for users.
  • the message remains viewable for a predetermined time interval (for example, 5 minutes) from the moment of its first visualization by the receiving user. At the expiration of the predetermined time interval, the message is permanently deleted from the memory. This allows maximum privacy to be maintained for users.
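The expiry behaviour (deletion a fixed interval after first viewing) could be sketched like this. The class and method names are assumptions; the clock value is passed in explicitly so the countdown logic is easy to follow and test.

```python
class EphemeralMessage:
    """Holographic message deleted a fixed time after its first visualization.
    The 300-second default matches the 5-minute example in the text."""

    def __init__(self, content, ttl_seconds=300):
        self.content = content
        self.ttl = ttl_seconds
        self.first_viewed_at = None

    def view(self, now):
        """Return the content, or None once the interval has expired."""
        if self.first_viewed_at is None:
            self.first_viewed_at = now      # first visualization starts the countdown
        if now - self.first_viewed_at > self.ttl:
            self.content = None             # permanent deletion
        return self.content
```

In a real system `now` would come from the server clock, and deletion would remove the record from the remote memory rather than a local attribute.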
  • the holographic message sent (i.e. the multimedia content sent) is associated with a hypertext link, for the detailed view and/or purchase of the object sent.
  • the user can receive a message in which he or she can view the object in augmented reality, for example, a fashion object. If the object is to the user's liking, the product can be purchased directly with a single click.
  • the method involves a video making step.
  • the processor receives video data, representative of a video made with the electronic device and including at least one animated subject.
  • the processor recognizes the movements of the person in the video.
  • the processor generates artificial video data, based on the video data.
  • the artificial video data represents a digital avatar that reproduces the same movements of the person detected with the camera of the electronic device.
  • the method provides for a step of sending said artificial video data from a first user to a second user.
  • the method comprises a step of receiving an additional identification code, uniquely associated with an additional reference object captured by the camera.
  • the processor retrieves the one or more virtual image datasets from the memory based on the identification code and the additional identification code it has received.
  • the method comprises capturing two reference objects simultaneously, each having a respective identification code.
  • the processor receives both identification codes and, based on the combination of identification codes, retrieves the one or more virtual image datasets from the memory. This allows, for example, capturing the objects of two persons connected by a certain relationship, hence retrieving from the memory one or more virtual image datasets representing objects that are shared by both persons.
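Retrieval keyed on the combination of two identification codes might look like the sketch below. Keying on the unordered pair makes retrieval independent of which object was captured first; the store contents and names are illustrative assumptions.

```python
# Datasets shared by pairs of reference objects, keyed by the unordered
# pair of identification codes (illustrative contents).
SHARED_STORE = {
    frozenset({"CD1", "CD2"}): [{"name": "friendship-heart"}],
}

def retrieve_shared(code_a, code_b):
    """Datasets retrieved only when both identification codes are received,
    e.g. when the objects of two related persons are captured together."""
    return SHARED_STORE.get(frozenset({code_a, code_b}), [])
```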
  • At least one virtual image dataset represents an animated three-dimensional image.
  • the content derived from the virtual image data is animated relative to the real image captured.
  • the processor, in the step of generating the three-dimensional image data, is configured to generate the three-dimensional image based on the image data and/or the one or more virtual image datasets and/or the registration data, to display properties of the user logged into the electronic device.
  • the method also comprises other steps which are not necessarily performed by the electronic device.
  • the method comprises a step of preparing a reference object and an electronic device.
  • the method comprises a step of saving one or more virtual image datasets to a memory, each representing a corresponding three- dimensional image of an object.
  • the method comprises a step of capturing real image data, representing an image of the reference object, using a camera of the electronic device.
  • the method comprises a step of making an augmented reality image including the reference object available on the display of the electronic device, based on the three-dimensional image data.
  • this disclosure provides a portable electronic device including a processor which is programmed to perform one or more steps of the method described in this disclosure.
  • This disclosure also provides a computer program comprising instructions to perform one or more steps of the method described in this disclosure, when run on the electronic device according to this disclosure.
  • this disclosure provides a computer server comprising a database.
  • the database comprises one or more virtual image datasets, each representing a corresponding three-dimensional image of an object.
  • the database comprises one or more registration datasets, each representing a user.
  • the database comprises one or more identification codes, each associated with a respective real reference object.
  • each virtual image dataset in the database is associated with at least one of the registration datasets so as to associate each virtual image dataset with at least one user.
  • each identification code in the database is associated with a corresponding registration dataset so as to associate each real reference object with a corresponding user.
  • the database comprises one or more visibility datasets representing one or more registration datasets that identify one or more users.
  • each visibility dataset in the database is associated with at least one of the virtual image datasets so as to associate each virtual image dataset with at least one user who has permission to display it.
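One possible relational layout for the server database described in the bullets above, sketched with SQLite. All table and column names are assumptions introduced for illustration; the patent does not specify a schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE registrations (user_id TEXT PRIMARY KEY);
CREATE TABLE reference_objects (
    code TEXT PRIMARY KEY,                          -- identification code
    owner_id TEXT REFERENCES registrations(user_id) -- object belongs to a user
);
CREATE TABLE virtual_images (
    image_id TEXT PRIMARY KEY,
    owner_id TEXT REFERENCES registrations(user_id) -- image belongs to a user
);
CREATE TABLE visibility (                           -- who may display which image
    image_id TEXT REFERENCES virtual_images(image_id),
    viewer_id TEXT REFERENCES registrations(user_id)
);
""")
conn.executescript("""
INSERT INTO registrations VALUES ('alice'), ('bob');
INSERT INTO reference_objects VALUES ('CD1', 'alice');
INSERT INTO virtual_images VALUES ('star', 'alice');
INSERT INTO visibility VALUES ('star', 'bob');
""")

def images_visible_to(viewer_id):
    """Virtual image datasets a given registered user may display."""
    rows = conn.execute(
        "SELECT image_id FROM visibility WHERE viewer_id = ?", (viewer_id,))
    return [r[0] for r in rows]
```

Each of the four associations in the text maps to one table or foreign key: objects to owners, images to owners, and images to authorized viewers.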
  • this disclosure provides a system for displaying multimedia contents with augmented reality technology.
  • the system comprises a reference object.
  • the system comprises a memory containing virtual image datasets, each representing a corresponding three-dimensional image of an object.
  • the system comprises an electronic device.
  • the electronic device comprises a camera, configured to capture real image data, representing an image of the reference object.
  • the electronic device comprises a processor, configured to receive the real image data from the camera.
  • the processor is configured to retrieve one or more of the virtual image datasets from the memory.
  • the processor is configured to generate three-dimensional image data based on the real image data and the virtual image datasets retrieved.
  • the electronic device comprises a display, configured to display an augmented reality image including the reference object, based on the three-dimensional image data.
  • the reference object comprises a transmitter.
  • the transmitter is configured to send an identification code, uniquely associated with the reference object.
  • the electronic device comprises a receiver, configured to receive the identification code from the reference object.
  • the device is programmed to query the memory, through the processor, based on the identification code, to selectively retrieve virtual image datasets uniquely associated with the reference object through the identification code.
  • the memory is a remote database located on a remote server.
  • the transmitter of the reference object is a wireless transmitter, where transmission is by radio waves, for example, Bluetooth, UWB or other technologies.
  • the transmitter of the reference object is a near field communication (NFC) transmitter configured to transmit the identification code to the electronic device wirelessly.
  • NFC near field communication
  • the system comprises an additional reference object, including a respective transmitter, configured to send a corresponding identification code, uniquely associated with the additional reference object.
  • the processor is configured to query the memory based on the identification code received from the reference object and based on the identification code received from the additional reference object, so as to selectively retrieve specific virtual image datasets.
  • the reference object and the additional reference object are configured to dialogue with each other, transmitting one or more virtual image datasets and/or the respective identification codes to and from each other.
  • Figure 1 shows a system for displaying multimedia contents with augmented reality technology;
  • Figure 2 shows an embodiment of the system of Figure 1;
  • Figure 3 shows an embodiment of the system of Figure 1;
  • Figure 4 shows an embodiment of the system of Figure 1;
  • Figure 5 shows an embodiment of the system of Figure 1.
  • the numeral 1 denotes a system for displaying multimedia contents with augmented reality technology.
  • the system 1 comprises an electronic device 10, which may be a smartphone, a tablet, a personal computer or other device.
  • the electronic device 10 comprises a camera 11, configured to capture image data.
  • the electronic device 10 comprises a display 12, on which images or graphical information in general can be displayed.
  • the electronic device 10 comprises a communication element 13 for communicating with remote servers or external objects through a wireless connection. More specifically, the communication element can use a Bluetooth, wireless, UWB or NFC (near field communication) connection.
  • the device 10 comprises a processor, configured to execute a set of instructions following suitable programming of the device 10.
  • the system 1 comprises a reference object 20.
  • the reference object 20 is preferably a wristband but it could be any other object.
  • the reference object 20 comprises an identification element 21.
  • the identification element (or tag) may be a transmitter but it might also be of a different kind (for example, a barcode or a decoration created by steganography). For convenience, reference will hereinafter be made to a transmitter 21 without thereby limiting the scope of the concept of identification element.
  • the transmitter 21 may be configured to send data to the electronic device 10. More specifically, the transmitter 21 is configured to send to the electronic device an identification code 201 which is uniquely correlated with the wristband 20.
  • the transmitter can use a Bluetooth, wireless, UWB or NFC (near field communication) connection.
  • the reference object may also comprise a receiver (also operating through Bluetooth, wireless, UWB or NFC) to receive data from the electronic device 10 or from another wristband 20’.
  • the device 10 comprises a user interface to send one or more inputs to the processor.
  • the user interface may be embodied in a touch screen, a keyboard, a mouse, a voice command system or any other means that can allow interaction between the electronic device and the user.
  • the system 1 comprises a memory 30, which is preferably a remote memory residing on a remote server.
  • the processor comprises instructions for executing a plurality of steps that define a method for displaying digital contents in the form of augmented reality.
  • a user who wishes to use the electronic device 10 to display an augmented reality image starts by registering with the electronic device.
  • the processor receives registration data from a user, who enters the data from the user interface.
  • the processor receives the registration data and saves them to the remote memory 30.
  • the remote memory (the remote database) therefore comprises a user table where, for each user, there is a record containing the registration data entered by the user.
  • the registration data entered by the user also includes an identification code, associated with a wristband 20 belonging to the user.
  • an identification code associated with a wristband 20 belonging to the user.
  • a specific wristband 20 is associated with each user.
  • the method comprises a step of logging in.
  • the user enters access data through the user interface.
  • the processor receives the access data.
  • the processor searches the plurality of records in the user table to verify whether the access data entered correspond to those of a registered user.
  • the processor allows or denies access to the electronic device 10.
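The registration/login check can be sketched as a lookup against the saved registration data, returning the wristband code on success. Credentials are kept in plaintext here purely for brevity (a real system would store password hashes), and all names are assumptions.

```python
# Saved registration data: access credentials plus the identification code
# of the wristband associated with each user (illustrative contents).
REGISTRATIONS = {
    "alice@example.com": {"password": "s3cret", "wristband_code": "CD1"},
}

def log_in(email, password):
    """Return the user's wristband identification code on success,
    or None when access is denied."""
    record = REGISTRATIONS.get(email)
    if record is None or record["password"] != password:
        return None
    return record["wristband_code"]
```

The returned code is what the processor later compares against the code received from the captured wristband.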
  • the electronic device 10 can be used without logging in.
  • the camera 11 of the device 10 is configured to capture image data corresponding to a real image of the wristband 20.
  • the communication element 13 is also configured to receive an identification code CD1 from the transmitter 21 of the wristband 20.
  • the processor has access to the real image data captured, to the identification code CD1 of the wristband 20 and to the registration data of the logged user.
  • the processor is programmed to send a request for data 401 to the remote memory 30 to request one or more virtual image datasets 402.
  • the request for data 401 is made on the basis of the identification code CD1 and/or on the basis of the registration data of the logged user and/or on the basis of the real image data captured.
  • the request for data 401 is programmed for retrieving one or more virtual image datasets 402 which are associated with the unique code CD1.
  • the one or more virtual image datasets 402 are associated with at least one corresponding unique code, that is, with at least one wristband. Therefore, by this request for data 401, the processor is configured to retrieve one or more virtual image datasets 402 associated with the wristband whose identification code CD1 is that of the wristband 20 captured.
  • the processor is then configured to process the image data captured by the camera 11 and the one or more virtual image datasets 402 to generate three-dimensional image data, representing a three-dimensional image 403 shown on the display 12 of the device 10.
  • the three-dimensional image 403 may include three-dimensional objects, animated objects 403’, two-dimensional objects, wristband decorations, objects that identify the user’s mood.
  • the processor is configured to process the real image data captured by the camera 11 and the registration data of the logged user to show an augmented reality image in which user information 405 is also shown on the display 12.
  • the processor is also configured to filter the visibility of the one or more virtual image datasets 402 based on the registration data.
  • the one or more virtual image datasets 402 in the remote memory 30 are (also) associated with visibility data 404, representing one or more users (that is, with one or more registration data records) who are authorized to display them.
  • the processor is configured to check, for each of the virtual image datasets 402, that the corresponding visibility data 404 include the registration data of the logged user. In the absence of this requisite, the virtual image data 402 are not shown on the display of the electronic device 10.
  • the user has purchased (downloaded, saved to the remote memory 30) a first virtual image dataset, corresponding to a three-dimensional star, and a second virtual image dataset, corresponding to a three-dimensional heart.
  • the first virtual image dataset (star) and the second virtual image dataset (heart) are each associated with the unique code CD1 of the user’s wristband 20. Therefore, after the user has logged in and captured the wristband 20 with the camera, the processor retrieves the virtual image data associated with the unique code CD1; that is to say, it retrieves the first virtual image dataset (star) and the second virtual image dataset (heart). In addition to these, the processor also retrieves the registration data of the user associated with the wristband 20 having the identification code CD1.
  • the processor shows both of the virtual image datasets (star and heart) on the wristband 20.
  • the processor checks whether or not the visibility data 404 of the first and second virtual image datasets 402 include the registration data of the logged user. If no such match is found, the virtual image data 402 are not shown on the display 12 of the device 10.
  • the system 1 comprises an additional wristband 20’ (additional reference object), including a corresponding transmitter 21’ and uniquely associated with a corresponding identification code CD2. Transmission of data between the additional wristband 20’ and the electronic device 10 is substantially the same as that described for the wristband 20.
  • the processor retrieves the virtual image data 402 based on both the identification code CD1 and the additional identification code CD2.
  • some virtual image datasets 402 in the remote memory 30 might be associated with a pair of identification codes CD1, CD2, so that such virtual image data 402 are retrieved only if the processor receives both the identification code CD1 and the identification code CD2.
  • the method performed by the system comprises a step of grouping users and a step of grouping image datasets.
  • a user sends commands to the processor to group together one or more of the users registered in the local memory, according to a grouping logic decided by the user.
  • the user might group together one or more other users belonging to the user’s family or group of friends.
  • a user sends commands to the processor to group together one or more of the image datasets 402 registered in the local memory, according to a grouping logic decided by the user.
  • the user might group together one or more image datasets 402 having emotional features in common, for example, representing common feelings such as love, friendship, respect, and so on.
  • the user can enter commands to associate one or more groups of virtual image datasets 402 with one or more groups of users. More specifically, the processor enters and saves among the visibility data 404 of one virtual image dataset 402 the registration data of all the users forming part of a certain group of users.
  • the user might group together virtual image datasets representing hearts and flowers for one specific group of users, for example, close friends.
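Associating a group of datasets with a group of users, as described in the bullets above, amounts to adding every group member to each dataset's visibility data. The function and the example group contents are assumptions for illustration.

```python
def grant_group(visibility, dataset_ids, user_group):
    """Add every user in the group to the visibility data of each dataset,
    so the whole group of images is displayable by the whole group of users."""
    for dataset_id in dataset_ids:
        visibility.setdefault(dataset_id, set()).update(user_group)
    return visibility

# Illustrative grouping: hearts and flowers shared with close friends.
CLOSE_FRIENDS = {"bob", "carol"}
HEARTS_AND_FLOWERS = ["heart", "flower"]
```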
  • the method may comprise a step of interaction between the virtual image data 402 of a wristband 20 and the virtual image data 402 of an additional wristband 20’.
  • the processor receives a first interaction command from a first user.
  • the processor animates (modifies) the virtual image data shown on the wristband 20 in real time based on the first interaction command.
  • the processor receives a second interaction command from a second user.
  • the processor animates (modifies) the virtual image data shown on the additional wristband 20’ in real time based on the second interaction command.
  • the second user can act on an additional electronic device 10’.
  • the three-dimensional objects which are shown on (associated with) the wristband 20 and the additional wristband 20’ are made to interact. This may be utilized for different applications, for example, to allow two users to play using augmented reality technology.
  • the processor is programmed to receive a first group of interaction data, representative of a command executed by a first user on the display of a first device, and a second group of interaction data, representative of a command executed by a second user on the display of a second device (or even on the first device itself).
  • the processor is programmed to generate the virtual image data based on the first interaction data group and the second interaction data group.
  • the first interaction data set is selected based on the identification code of the first bracelet 20.
  • the second interaction data set is selected based on the identification code of the additional bracelet 20’.
  • the types of interactions that can occur between two bracelets belonging to different users include:
  • stickers: multimedia contents owned by two different users.
  • user A, in possession of a particular type of sticker, will see its 3D model interact with the second user's 3D model. For example, a half heart will come alive and join the second user's half heart, forming the word "love".
  • the animations of the stickers are linked to the relationship between the two users; the bracelets are recognized both by the internal chip, which communicates its bond to the other users' bracelets, and by the bond that exists between the two users within the application, as well as by the type of interaction that the stickers have;
  • users meeting and approaching their bracelets will be able to start augmented reality games that will bring the bracelets to interact with each other.
  • user A will be able to initiate an air attack by displaying airplanes that take off from their bracelet and head towards the bracelet of user B.
  • user B will be able to activate countermeasures to reduce the impact of the opponent's attack, and then launch an attack in turn.
  • the goal of the game could be to earn experience points and win challenges with friends, using the bracelet as a starting point and augmented reality technology as the means of visualizing the game.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A method for displaying multimedia contents with augmented reality technology comprises the following steps, performed by the processor of an electronic device (10): receiving real image data, representing an image of a reference object (20), captured by a camera (11) of the electronic device (10); retrieving one or more virtual image datasets (402) from the memory (30); generating derived image data (403) based on the real image data and the virtual image datasets (402) retrieved; sending the derived image data (403) to a display (12) of the electronic device (10) to display an augmented reality image including the reference object (20). The method comprises a step of receiving an identification code (CD1), uniquely associated with the reference object (20) captured by the camera (11). The processor retrieves the one or more virtual image datasets (402) from the memory (30) based on the identification code (CD1) it has received.

Description

DESCRIPTION
METHOD FOR DISPLAYING AUGMENTED REALITY MULTIMEDIA
CONTENTS
Technical field
This invention relates to a method for displaying multimedia contents with augmented reality technology. Amongst other things, augmented reality technology allows three-dimensional multimedia content to be shown on a photo of a real subject.
Background art
In these solutions, the method involves capturing image data, representing a real image of an object, using a camera of an electronic device. The image data are sent to a processor. In addition to the image data, the processor receives virtual image data, representing three-dimensional objects to be displayed on the real image. The processor processes the image data and the virtual image data to generate derived image data (preferably three-dimensional), which are then displayed on the electronic device in the form of an augmented reality image.
These solutions are not, however, very flexible since the virtual image data received for a real object are always substantially the same, which leaves little room for varying the augmented reality image.
Other disadvantages of these methods regard the controlling of the accessibility to the virtual image data, which are visible to any person using the electronic device.
Other known systems and methods disclose a solution in which the displayed image data are discriminated on the basis of an identification code that can be sent by an antenna in the surrounding environment or by an object framed by the camera. Such systems are described, for example, in patent documents US2019019011 and US2020066050.
However, such methods are not designed to allow interactions between different users who wish to interact with each other via their devices.
Disclosure of the invention
This invention has for an aim to provide a method and an electronic device for displaying multimedia contents with augmented reality technology to overcome the above mentioned disadvantages of the prior art.
This aim is fully achieved by the method and device of this disclosure as characterized in the appended claims.
According to an aspect of it, this disclosure provides a method for displaying multimedia contents with augmented reality technology, through an electronic device including a processor and having access to a memory.
The method comprises a step of receiving real image data, representing an image of a reference object, captured by a camera (or optical device) of the electronic device.
The method comprises a step of retrieving one or more virtual image datasets from the memory. The virtual image data represent corresponding three-dimensional (or even two-dimensional) images of objects.
The method comprises a step of generating derived image data based on the real image data and the virtual image datasets retrieved.
The method comprises a step of sending the derived image data to a display of the electronic device to display an augmented reality image including the reference object.
In an embodiment, the method comprises a step of reading an identification code, uniquely associated with the reference object captured by the camera. The reference object is provided with an identification element (or tag) containing information representing the identification code. For example, the identification element may be a barcode (readable by a scanner or a camera) or a transmitter (active element) or a memory containing reference data (passive element). The electronic device can read the reference code using different technologies, depending also on the type of identification element used; for example, reading might occur by capturing an image or by exchanging data using NFC or RFID technology.
In an embodiment, in the step of retrieving, the processor retrieves the one or more virtual image datasets from the memory based on the identification code it has received.
This feature allows discriminating the virtual image data based on the object captured, thus varying the augmented reality image based on the object captured.
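The retrieval keyed on the identification code can be sketched as follows; this is a minimal illustration assuming the memory (30) is modelled as an in-memory dictionary, with all names and sample codes hypothetical:

```python
# Hypothetical in-memory model of the remote memory (30): each
# identification code maps to the virtual image datasets associated
# with the corresponding reference object.
VIRTUAL_IMAGE_STORE = {
    "CD1": [{"model": "star_3d"}, {"model": "heart_3d"}],
    "CD2": [{"model": "flower_3d"}],
}

def retrieve_virtual_datasets(identification_code):
    """Return the virtual image datasets associated with the code
    read from the reference object; an unknown code yields nothing."""
    return VIRTUAL_IMAGE_STORE.get(identification_code, [])
```

In a real system the lookup would be a query against the remote database rather than a dictionary access, but the discriminating role of the identification code is the same.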
In an embodiment, the method comprises a step of registering, in which the processor receives registration data and a registration request from a user and responds by saving the registration data to the memory. The registration data are uniquely associated with a user. The registration data are uniquely associated with the identification code of a reference object associated with the user.
In an embodiment, the method comprises a step of logging in, in which a user who wishes to use the electronic device is logged in or denied access.
In the step of logging in, the processor receives access data from a user. The access data identify the user and may include a username, an email address, a password or other personal information connected with the user.
In the step of logging in, the processor checks that the access data match the registration data saved in the memory. Matching between the access data and the registration data is proof that the user is effectively registered.
In the step of logging in, the processor retrieves from the memory the identification code of the reference object the user is associated with.
After these steps, therefore, the processor is in possession of an identification code of the reference object framed and an identification code uniquely associated with the logged user.

The method comprises a step of comparing identification codes, where the processor compares the identification code received from the reference object with the identification code retrieved from the memory in response to receiving the access data, to verify that the reference object is associated with the user logged into the electronic device.

In an embodiment of the method, the processor retrieves the virtual image data based on the comparison between the identification codes. This allows displaying some three-dimensional objects in the form of augmented reality only if the identification code received from the object is the same code associated with the logged user, that is to say, only if the image captured by the user is the image of an object which belongs to that user. Otherwise, if the object captured does not belong to that user (identification code received from the object different from that associated with the logged user), the processor filters the objects to be displayed in the form of augmented reality.
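A minimal sketch of the login and code-comparison steps described above, assuming a simple user table keyed on credentials (all names and sample values are hypothetical):

```python
# Hypothetical user table: registration data mapped to the
# identification code of the user's own reference object.
USER_TABLE = {
    ("alice", "secret1"): "CD1",
    ("bob", "secret2"): "CD2",
}

def login_and_verify(username, password, framed_code):
    """Check the access data against the stored registration data,
    then compare the identification code read from the framed object
    with the code associated with the logged user.
    Returns (logged_in, object_belongs_to_user)."""
    user_code = USER_TABLE.get((username, password))
    if user_code is None:
        return (False, False)  # access denied: no matching registration
    return (True, user_code == framed_code)
```

A production implementation would hash passwords and query the remote database; the sketch only shows the two checks the text describes.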
In an embodiment, the method comprises a step of receiving visibility data associated with a corresponding virtual image dataset. The visibility data represent registration data corresponding to users who are authorized to display the corresponding three-dimensional image of the object.
Therefore, when the processor receives the identification code of the object, it retrieves the registration data associated with the identification code and checks, for each virtual image dataset, that the visibility data include those data. If they do, the processor retrieves the virtual image dataset; if they do not, the processor prevents them from being displayed. In an embodiment, for each virtual image dataset, the processor checks that the access data of the logged user match the visibility data and, based on this check, enables or inhibits the visibility of the corresponding virtual image dataset.
In an embodiment, the visibility data are associated with groups of virtual image datasets, including at least two virtual image datasets and representing a plurality of three-dimensional images. This feature allows creating groups of three-dimensional images which have the same displaying permissions, that is to say, which are displayable by a group of people, for example, a specific group of friends.
In one embodiment, the visibility data (which include, inter alia, user credentials) are associated with one or more identification codes, corresponding to one or more reference objects, which include the reference object owned by the authenticated user and also include other reference objects of other users.
The visibility data includes, for each identification code to which they are associated, a respective group of multimedia data, representative of augmented reality objects visible on the corresponding reference object. Therefore, in one embodiment, a user, via a user interface on the mobile device, sets, for each user (i.e. for each set of registration data identifying a user), which multimedia contents are visible by said user on his own device.
To be further clear, the visibility data includes the following fields:
- registration data, representative of information of the authenticated user;
- multimedia data, representative of three-dimensional multimedia contents;
- device data (i.e. a specific identification code), representative of a specific reference object.

These three fields identify what content a specific user can view on a specific reference object. The field relating to the multimedia data visible by the specific user can only be set by the owner of the reference object identified by the device data.
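The three-field visibility record described above can be sketched as a small data structure; field and function names are illustrative, not taken from the actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VisibilityRecord:
    """One visibility entry: which user may see which multimedia
    contents on which reference object (field names hypothetical)."""
    registration_data: str  # identifies the authenticated user
    multimedia_data: tuple  # three-dimensional contents visible to that user
    device_data: str        # identification code of the reference object

def visible_contents(records, user, device_code):
    """Contents the given user may view on the given reference object."""
    return [content
            for r in records
            if r.registration_data == user and r.device_data == device_code
            for content in r.multimedia_data]
```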
This embodiment advantageously allows a user to show their contacts different contents depending on the person and on the relationship with that person.
In one embodiment, each multimedia content is associated with positioning data, representative of a position (or an order) with which the multimedia content can be viewed on the reference object (or on the bracelet). This is important because the order of the objects on the reference object greatly influences the message the user wants to convey through the contents, all the more so when sending a real holographic message.
Therefore, the method provides for a step of sending positioning data associated with a respective multimedia content. This step of sending is preferably carried out by positioning (selecting, then dragging and dropping) the multimedia content at a specific position on the bracelet, in a user interface provided on the screen of an electronic device (e.g. a smartphone).
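A minimal sketch of how the positioning data could drive the rendering order, assuming positions are simple integers assigned via the drag-and-drop interface (names hypothetical):

```python
def arrange_contents(contents_with_positions):
    """Sort multimedia contents by the position assigned on the
    bracelet, so the rendered order matches the intended message.
    Input: iterable of (position, content) pairs."""
    return [content for _, content in sorted(contents_with_positions)]
```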
In one embodiment, the method provides a step of sending multimedia data. In the step of sending multimedia data, a first user sends multimedia data to a second user, who can view said multimedia data on his own bracelet. In this way, it is possible to create a holographic chat with three-dimensional objects.
In particular, in one embodiment, the memory (remote or local) comprises a plurality of three-dimensional models that define a new type of alphabet. This alphabet is three-dimensional and configured to define a modern and fast language, suitable for users.
Users will be able to choose and compose their own message (step of choosing multimedia data) and to send it to their friends or groups of friends (step of sending multimedia data). The user who receives the multimedia data of the holographic alphabet is notified by a push notification and, by clicking on it, they will be able to read (or view) the message sent on their bracelet, using augmented reality technology.
This allows three-dimensional messages to be sent between users, through the use of a new type of holographic alphabet that can be viewed via the reference object (i.e. the bracelet).
Preferably, the message remains viewable for a predetermined time interval (for example, 5 minutes) from the moment of the first visualization by the receiving user. At the expiration of the predetermined time interval, the message is permanently deleted from the memory. This helps maintain maximum privacy for users.
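The time-limited visibility described above can be sketched as follows, assuming plain epoch-second timestamps; the 5-minute window matches the example in the text, and all names are hypothetical:

```python
VISIBILITY_WINDOW_SECONDS = 5 * 60  # the 5-minute example above

def is_message_visible(first_viewed_at, now):
    """A holographic message stays viewable for a fixed window after
    its first visualization; afterwards it is treated as deleted.
    Timestamps are seconds (e.g. from time.time()); None means the
    receiver has not yet opened the message."""
    if first_viewed_at is None:
        return True
    return (now - first_viewed_at) <= VISIBILITY_WINDOW_SECONDS
```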
In one embodiment, the holographic message sent (i.e. the multimedia content sent) is associated with a hypertext link for the detailed view and/or purchase of the object sent. In this way, the user can receive a message in which they can view the object in augmented reality, for example a fashion item. If the object is to their liking, with a simple click they can buy the product directly.
In one embodiment, the method involves a video making step. In the video making step, the processor receives video data, representative of a video made with the electronic device and including at least one animated subject. The processor recognizes the movements of the person in the video. The processor generates artificial video data, based on the video data. The artificial video data represents a digital avatar that reproduces the same movements of the person detected with the camera of the electronic device. The method provides for a step of sending said artificial video data from a first user to a second user.
In an embodiment, the method comprises a step of receiving an additional identification code, uniquely associated with an additional reference object captured by the camera.
In the step of retrieving, the processor retrieves the one or more virtual image datasets from the memory based on the identification code and the additional identification code it has received.
In other words, the method comprises capturing two reference objects simultaneously, each having a respective identification code. The processor receives both identification codes and, based on the combination of identification codes, retrieves the one or more virtual image datasets from the memory. This allows, for example, capturing the objects of two persons connected by a certain relationship, hence retrieving from the memory one or more virtual image datasets representing objects that are shared by both persons.
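The combined retrieval based on two identification codes can be sketched by keying shared datasets on an unordered pair of codes, so the order in which the two objects are framed does not matter (store contents and names hypothetical):

```python
# Hypothetical store of shared datasets, keyed by an unordered pair
# of identification codes: the datasets are retrievable only when
# both reference objects are framed together by the camera.
SHARED_STORE = {
    frozenset({"CD1", "CD2"}): [{"model": "joined_heart_3d"}],
}

def retrieve_shared_datasets(code_a, code_b):
    """Datasets associated with the combination of the two codes."""
    return SHARED_STORE.get(frozenset({code_a, code_b}), [])
```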
In an embodiment, at least one virtual image dataset represents an animated three-dimensional image. In short, in the three-dimensional, augmented reality image, the content derived from the virtual image data is animated relative to the real image captured.
In an embodiment, in the step of generating the three-dimensional image data, the processor is configured to generate the three-dimensional image based on the image data and/or the one or more virtual image datasets and/or the registration data, to display properties of the user logged into the electronic device.
In an embodiment, the method also comprises other steps which are not necessarily performed by the electronic device. For example, the method comprises a step of preparing a reference object and an electronic device. The method comprises a step of saving one or more virtual image datasets to a memory, each representing a corresponding three-dimensional image of an object.
The method comprises a step of capturing real image data, representing an image of the reference object, using a camera of the electronic device. The method comprises a step of making an augmented reality image including the reference object available on the display of the electronic device, based on the three-dimensional image data.
According to an aspect of it, this disclosure provides a portable electronic device including a processor which is programmed to perform one or more steps of the method described in this disclosure.
This disclosure also provides a computer program comprising instructions to perform one or more steps of the method described in this disclosure, when run on the electronic device according to this disclosure.
According to an aspect of it, this disclosure provides a computer server comprising a database.
The database comprises one or more virtual image datasets, each representing a corresponding three-dimensional image of an object.
The database comprises one or more registration datasets, each representing a user. The database comprises one or more identification codes, each associated with a respective real reference object.
In an embodiment, each virtual image dataset in the database is associated with at least one of the registration datasets so as to associate each virtual image dataset with at least one user. In an embodiment, each identification code in the database is associated with a corresponding registration dataset so as to associate each real reference object with a corresponding user.
In an embodiment, the database comprises one or more visibility datasets representing one or more registration datasets that identify one or more users.
In an embodiment, each visibility dataset in the database is associated with at least one of the virtual image datasets so as to associate each virtual image dataset with at least one user who has permission to display it.
According to an aspect of it, this disclosure provides a system for displaying multimedia contents with augmented reality technology.
The system comprises a reference object. The system comprises a memory containing virtual image datasets, each representing a corresponding three-dimensional image of an object.
The system comprises an electronic device. The electronic device comprises a camera, configured to capture real image data, representing an image of the reference object.
The electronic device comprises a processor, configured to receive the real image data from the camera. The processor is configured to retrieve one or more of the virtual image datasets from the memory. The processor is configured to generate three-dimensional image data based on the real image data and the virtual image datasets retrieved.
The electronic device comprises a display, configured to display an augmented reality image including the reference object, based on the three-dimensional image data.
In an embodiment, the reference object comprises a transmitter. The transmitter is configured to send an identification code, uniquely associated with the reference object. The electronic device comprises a receiver, configured to receive the identification code from the reference object.
The device is programmed to query the memory, through the processor, based on the identification code, to selectively retrieve virtual image datasets uniquely associated with the reference object through the identification code.
In an embodiment, the memory is a remote database located on a remote server.
In an embodiment, the transmitter of the reference object is a wireless transmitter, where transmission is by radio waves, for example, Bluetooth, UWB or other technologies. In a preferred embodiment, the transmitter of the reference object is a near field communication (NFC) transmitter configured to transmit the identification code to the electronic device wirelessly.
In an embodiment, the system comprises an additional reference object, including a respective transmitter, configured to send a corresponding identification code, uniquely associated with the additional reference object.
In an embodiment, the processor is configured to query the memory based on the identification code received from the reference object and based on the identification code received from the additional reference object, so as to selectively retrieve specific virtual image datasets.
In an embodiment, the reference object and the additional reference object are configured to dialogue with each other, transmitting one or more virtual image datasets and/or the respective identification codes to and from each other.
Brief description of the drawings
These and other features will become more apparent from the following detailed description of a preferred, non-limiting embodiment, with reference to the accompanying drawings, in which:
- Figure 1 shows a system for displaying multimedia contents with augmented reality technology;
- Figure 2 shows an embodiment of the system of Figure 1;
- Figure 3 shows an embodiment of the system of Figure 1;
- Figure 4 shows an embodiment of the system of Figure 1;
- Figure 5 shows an embodiment of the system of Figure 1.
Detailed description of preferred embodiments of the invention
With reference to the accompanying drawings, the numeral 1 denotes a system for displaying multimedia contents with augmented reality technology.
The system 1 comprises an electronic device 10, which may be a smartphone, a tablet, a personal computer or other device. The electronic device 10 comprises a camera 11, configured to capture image data. The electronic device 10 comprises a display 12, on which images or graphical information in general can be displayed.
The electronic device 10 comprises a communication element 13 for communicating with remote servers or external objects through a wireless connection. More specifically, the communication element can use a Bluetooth, wireless, UWB or NFC (near field communication) connection. The device 10 comprises a processor, configured to execute a set of instructions following suitable programming of the device 10.
The system 1 comprises a reference object 20. The reference object 20 is preferably a wristband but it could be any other object.
The reference object 20 comprises an identification element 21. The identification element (or tag) may be a transmitter but it might also be of a different kind (for example, a barcode or a decoration created by steganography). For convenience, reference will hereinafter be made to a transmitter 21 without thereby limiting the scope of the concept of identification element. The transmitter 21 may be configured to send data to the electronic device 10. More specifically, the transmitter 21 is configured to send to the electronic device an identification code CD1, which is uniquely correlated with the wristband 20.
The transmitter can use a Bluetooth, wireless, UWB or NFC (near field communication) connection.
Besides the transmitter 21, the reference object may also comprise a receiver (also operating through Bluetooth, wireless, UWB or NFC) to receive data from the electronic device 10 or from another wristband 20’. The device 10 comprises a user interface to send one or more inputs to the processor. The user interface may be embodied in a touch screen, a keyboard, a mouse, a voice command system or any other means that can allow interaction between the electronic device and the user.
The system 1 comprises a memory 30, which is preferably a remote memory residing on a remote server.
The processor comprises instructions for executing a plurality of steps that define a method for displaying digital contents in the form of augmented reality.
More specifically, in an embodiment, a user who wishes to use the electronic device 10 to display an augmented reality image starts by registering with the electronic device.
In the step of registering a user, the processor receives registration data from a user, who enters the data from the user interface.
The processor receives the registration data and saves them to the remote memory 30. The remote memory (the remote database) therefore comprises a user table where, for each user, there is a record containing the registration data entered by the user.
In the step of registering, the registration data entered by the user also includes an identification code, associated with a wristband 20 belonging to the user. Thus, a specific wristband 20 is associated with each user.
If the user is already a registered user, the method comprises a step of logging in.
In the step of logging in, the user enters access data through the user interface. The processor receives the access data. The processor searches the plurality of records in the user table to verify whether the access data entered correspond to those of a registered user.
As a function of this check, the processor allows or denies access to the electronic device 10. In some cases, the electronic device 10 can be used without logging in.
In any case, the camera 11 of the device 10 is configured to capture image data corresponding to a real image of the wristband 20. The communication element 13 is also configured to receive an identification code CD1 from the transmitter 21 of the wristband 20.
Therefore, after receiving this, the processor has access to the real image data captured, to the identification code CD1 of the wristband 20 and to the registration data of the logged user.
The processor is programmed to send a request for data 401 to the remote memory 30 to request one or more virtual image datasets 402. The request for data 401 is made on the basis of the identification code CD1 and/or on the basis of the registration data of the logged user and/or on the basis of the real image data captured.
For example, the request for data 401 is programmed for retrieving one or more virtual image datasets 402 which are associated with the unique code CD1. In effect, the one or more virtual image datasets 402 are associated with at least one corresponding unique code, that is, with at least one wristband. Therefore, by this request for data 401, the processor is configured to retrieve one or more virtual image datasets 402 associated with the wristband whose identification code CD1 is that of the wristband 20 captured.
The processor is then configured to process the image data captured by the camera 11 and the one or more virtual image datasets 402 to generate three-dimensional image data, representing a three-dimensional image 403 shown on the display 12 of the device 10.
The three-dimensional image 403 may include three-dimensional objects, animated objects 403’, two-dimensional objects, wristband decorations, objects that identify the user’s mood.
In an embodiment, the processor is configured to process the real image data captured by the camera 11 and the registration data of the logged user to show an augmented reality image in which user information 405 is also shown on the display 12.
The processor is also configured to filter the visibility of the one or more virtual image datasets 402 based on the registration data.
In effect, the one or more virtual image datasets 402 in the remote memory 30 are (also) associated with visibility data 404, representing one or more users (that is, with one or more registration data records) who are authorized to display them.
This allows the user who owns the wristband 20 to discriminate the visibility of the virtual image data 402 based on the user who captures an image of that wristband 20.
In essence, after retrieving the virtual image datasets 402 associated with the wristband 20, the processor is configured to check, for each of the virtual image datasets 402, that the corresponding visibility data 404 include the registration data of the logged user. In the absence of this requisite, the virtual image data 402 are not shown on the display of the electronic device 10.
Described below, to further clarify these features, is a very specific, non limiting example provided purely for the purpose of illustrating the solution described in this disclosure.
The user has purchased (downloaded, saved to the remote memory 30) a first virtual image dataset, corresponding to a three-dimensional star, and a second virtual image dataset, corresponding to a three-dimensional heart. The first virtual image dataset (star) and the second virtual image dataset (heart) are each associated with the unique code CD1 of the user’s wristband 20. Therefore, after the user has logged in and after the user has captured the wristband 20 with the camera, the processor retrieves the virtual image data associated with the unique code CD1; that is to say, it retrieves the first virtual image dataset (star) and the second virtual image dataset (heart). In addition to these, the processor also retrieves the registration data of the user associated with the wristband 20 having the identification code CD1.
If the registration data retrieved match the registration data of the logged user, the processor shows both of the virtual image datasets (star and heart) on the wristband 20.
If the registration data retrieved do not match the registration data of the logged user, on the other hand, the processor checks whether or not the visibility data 404 of the first and second virtual image datasets 402 include the registration data of the logged user. If no such match is found, the virtual image data 402 are not shown on the display 12 of the device 10.
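The decision flow of the star/heart example can be sketched as a single function; dataset names, user names and the visibility mapping are illustrative only:

```python
def datasets_to_display(logged_user, object_owner, datasets, visibility):
    """Decision flow of the star/heart example: the owner of the
    wristband sees all of its datasets; any other logged user sees
    only the datasets whose visibility data include them.
    `visibility` maps a dataset to the set of authorized users."""
    if logged_user == object_owner:
        return list(datasets)
    return [d for d in datasets if logged_user in visibility.get(d, set())]
```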
In an embodiment, the system 1 comprises an additional wristband 20’ (additional reference object), including a corresponding transmitter 21’ and uniquely associated with a corresponding identification code CD2. Transmission of data between the additional wristband 20’ and the electronic device 10 is substantially the same as that described for the wristband 20.
In any case, according to the method, if the wristband 20 and the additional wristband 20’ are captured simultaneously by the camera 11, the processor retrieves the virtual image data 402 based on both the identification code CD1 and the additional identification code CD2.
In other words, some virtual image datasets 402 in the remote memory 30 might be associated with a pair of identification codes CD1, CD2, so that such virtual image data 402 are retrieved only if the processor receives both the identification code CD1 and the identification code CD2.
In an embodiment, the method performed by the system comprises a step of grouping users and a step of grouping image datasets.
In the step of grouping users, a user sends commands to the processor to group together one or more of the users registered in the local memory, according to a grouping logic decided by the user.
By way of non-limiting example, the user might group together one or more other users belonging to the user’s family or group of friends.
In the step of grouping image datasets 402, a user sends commands to the processor to group together one or more of the image datasets 402 registered in the local memory, according to a grouping logic decided by the user.
By way of non-limiting example, the user might group together one or more image datasets 402 having emotional features in common, for example, representing common feelings such as love, friendship, respect, and so on.
Lastly, the user can enter commands to associate one or more groups of virtual image datasets 402 with one or more groups of users. More specifically, the processor enters and saves among the visibility data 404 of one virtual image dataset 402 the registration data of all the users forming part of a certain group of users.
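The association of a group of virtual image datasets with a group of users can be sketched as follows, modelling the visibility data 404 as a mapping from dataset to the set of authorized users (a hypothetical simplification):

```python
def grant_group_visibility(visibility, dataset_group, user_group):
    """Associate a group of virtual image datasets with a group of
    users: the registration data of every user in the group is added
    to the visibility data of every dataset in the group.
    `visibility` maps dataset name -> set of authorized users."""
    for dataset in dataset_group:
        visibility.setdefault(dataset, set()).update(user_group)
    return visibility
```

For example, granting the "close friends" group visibility over hearts and flowers is a single call on the two groups.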
For example, to further clarify this aspect, the user might group together virtual image datasets representing hearts and flowers for one specific group of users, for example, close friends.
According to another aspect of this disclosure, the method may comprise a step of interaction between the virtual image data 402 of a wristband 20 and the virtual image data 402 of an additional wristband 20’.
In this embodiment, the processor receives a first interaction command from a first user. The processor animates (modifies) the virtual image data shown on the wristband 20 in real time based on the first interaction command.
In this embodiment, the processor receives a second interaction command from a second user. The processor animates (modifies) the virtual image data shown on the additional wristband 20’ in real time based on the second interaction command.
In other embodiments, the second user can act on an additional electronic device 10’.
In the step of interaction, the three-dimensional objects which are shown on (associated with) the wristband 20 and the additional wristband 20’ are made to interact. This may be utilized for different applications, for example, to allow two users to play using augmented reality technology.
In the step of interaction, the processor is programmed to receive a first group of interaction data, representative of a command executed by a first user on the display of a first device, and a second group of interaction data, representative of a command executed by a second user on the display of a second device (or even on the first device itself). The processor is programmed to generate the virtual image data based on the first interaction data group and the second interaction data group.
In one embodiment, the first interaction data group is selected based on the identification code of the wristband 20, and the second interaction data group is selected based on the identification code of the additional wristband 20’.
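The combination of the two interaction data groups into updated virtual image data can be sketched as follows. This is an illustrative Python sketch under assumed names (`INTERACTIONS`, `generate_virtual_image`, the animation identifiers); it is not the actual implementation:

```python
# Illustrative sketch: the processor selects each user's interaction data by
# the wristband's identification code and combines them into the virtual
# image data. Names and values are assumptions for illustration.

INTERACTIONS = {
    "CD1": {"join_halves": "half_heart_left_anim"},
    "CD2": {"join_halves": "half_heart_right_anim"},
}

def generate_virtual_image(first_cmd, second_cmd):
    """Build the animated frame from both users' interaction commands."""
    first = INTERACTIONS["CD1"].get(first_cmd)
    second = INTERACTIONS["CD2"].get(second_cmd)
    if first and second:
        # Both halves animate and join, e.g. forming the word "love".
        return {"animations": [first, second], "caption": "love"}
    return {"animations": [a for a in (first, second) if a], "caption": None}
```

The joined-heart case below (two stickers whose animations complete each other) corresponds to the branch where both commands resolve to an animation.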
For example, the types of interaction that can occur between two wristbands belonging to different users include:
- linked animations between multimedia contents (hereinafter, stickers) owned by two different users. In this case, user A, in possession of a particular type of sticker, will see its 3D model interact with the 3D model of the second user. For example, a half heart will come alive and join the second user’s half heart, forming the word "love". Here, the animations of the stickers are tied to the relationship between the two users: the wristbands are recognized both through the internal chip, which communicates its bond to the other users’ wristbands, and through the bond that exists between the two users within the application, as well as through the type of interaction supported by the stickers;
- more advanced interactions in the gaming field: in this case, users who meet and bring their wristbands close together will be able to start augmented reality games that make the wristbands interact with each other. For example, user A will be able to launch an air attack by displaying airplanes that take off from user A’s wristband and head toward the wristband of user B. User B will be able to activate countermeasures to reduce the impact of the opponent’s attack, and then launch a counter-attack in turn. The goal of the game could be to earn experience points and win challenges with friends, using the wristband as a starting point and augmented reality technology as the means of visualizing the game.
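The attack-and-countermeasure exchange described for the gaming example can be sketched as a simple turn resolution. This is an illustrative Python sketch; the scoring rule, numbers, and names (`resolve_attack`) are assumptions made purely for illustration:

```python
# Illustrative sketch of the turn-based AR game described above: an attack
# launched from one wristband, reduced by the other user's countermeasures.
# The scoring rule and all values are assumptions for illustration only.

def resolve_attack(attack_power, countermeasure_strength):
    """Return (damage dealt, experience points earned by the defender)."""
    damage = max(0, attack_power - countermeasure_strength)
    xp_defender = min(attack_power, countermeasure_strength)  # reward for blocking
    return damage, xp_defender

# User A sends airplanes of power 10; user B's countermeasures block 6 of it.
damage, xp = resolve_attack(10, 6)
```

The defender then takes the attacking role, and the exchange repeats until one side's points are exhausted, with the wristbands serving as the anchor points for the augmented reality visualization.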

Claims

1. A method for displaying multimedia contents with augmented reality technology, through an electronic device (10) including a processor and having access to a memory (30), the method comprising the following steps, performed by the processor:
- receiving real image data, representing an image of a reference object (20), captured by a camera (11) of the electronic device (10);
- retrieving one or more virtual image datasets (402) from the memory (30);
- generating derived image data (403) based on the real image data and the virtual image datasets (402) retrieved;
- sending the derived image data (403) to a display (12) of the electronic device (10) to display an augmented reality image including the reference object (20), the method being characterized in that it comprises a step of receiving an identification code (CD1), uniquely associated with the reference object (20) captured by the camera (11), wherein, in the step of retrieving, the processor retrieves the one or more virtual image datasets (402) from the memory (30) based on the identification code (CD1) it has received.
2. The method according to claim 1, comprising a step of logging in, in which the processor receives access data from a user, checks that the access data match the registration data saved in the memory (30), which are uniquely associated with a user and with the identification code (CD1) of a reference object (20) associated with the user, and retrieves from the memory (30) the identification code (CD1) of the reference object (20) associated with the user.
3. The method according to claim 2, comprising a step of comparing identification codes, wherein the processor compares the identification code (CD1) received from the reference object (20) with the identification code retrieved from the memory (30) in response to receiving the access data, to verify that the reference object (20) is associated with the user logged into the electronic device (10).
4. The method according to claim 3, comprising a step of receiving visibility data (404) associated with a corresponding virtual image dataset (402) representing registration data corresponding to users who are authorized to display the three-dimensional image defined by the virtual image data (402).
5. The method according to claim 4, wherein for each virtual image dataset (402), the processor checks that the access data of the logged user match the visibility data (404) and, based on this check, enables or inhibits the visibility of the corresponding virtual image dataset (402).
6. The method according to claim 4 or 5, wherein the visibility data (404) are associated with groups of virtual image datasets, including at least two virtual image datasets (402) and representing a plurality of three-dimensional images.
7. The method according to any one of the preceding claims, comprising a step of receiving an additional identification code (CD2), uniquely associated with an additional reference object (20’) captured by the camera (11), and wherein in the step of retrieving, the processor retrieves from the memory (30) the one or more virtual image datasets (402) based on the identification code (CD1) and the additional identification code (CD2).
8. The method according to any one of the preceding claims, wherein at least one virtual image dataset (402) represents an animated or animatable, three-dimensional image (403’).
9. The method according to any one of the preceding claims, comprising the following steps:
- preparing the reference object (20) and the electronic device (10);
- saving to the memory (30) the one or more virtual image datasets (402), each representing a corresponding three-dimensional image of an object;
- capturing the real image data through the camera (11) of the electronic device (10);
- making an augmented reality image including the reference object (20) available on the display (12) of the electronic device (10), based on the three-dimensional image data (403).
10. The method according to any one of the preceding claims, wherein the virtual image data (402) of a wristband (20) interact with the virtual image data (402) of an additional wristband (20’).
11. A portable electronic device (10) including a processor programmed to perform the steps of the method according to any one of the preceding claims.
12. A computer program comprising instructions to perform the steps of the method according to any one of claims 1 to 10, when run on the electronic device (10) according to claim 11.
13. A computer server (30) comprising a database, including:
- one or more virtual image datasets (402), each representing a corresponding three-dimensional image of an object;
- one or more registration datasets, each representing a user;
- one or more identification codes (CD1, CD2), each associated with a respective real reference object (20, 20’), wherein each virtual image dataset (402) is associated with at least one of the registration datasets, to associate each virtual image dataset (402) with at least one user, and wherein each identification code (CD1, CD2) is associated with a corresponding registration dataset, to associate each real reference object (20, 20’) with a corresponding user.
14. A system (1) for displaying multimedia contents with augmented reality technology, comprising:
- a reference object (20);
- a memory (30) containing virtual image datasets (402), each representing a corresponding three-dimensional image of an object,
- an electronic device (10), including:
- a camera (11), configured to capture real image data, representing an image of the reference object (20);
- a processor, configured to receive the real image data from the camera (11) and to retrieve one or more of the virtual image datasets (402) from the memory (30), and configured to generate derived image data (403) based on the real image data and on the virtual image datasets (402) retrieved;
- a display (12), configured to display an augmented reality image including the reference object (20), based on the derived image data (403);
characterized in that the reference object comprises an identification element (21), including an identification code (CD1), uniquely associated with the reference object (20), and wherein the electronic device (10) is configured to capture the identification code (CD1) from the reference object (20), wherein the device (10) is programmed to query the memory (30), through the processor, based on the identification code (CD1), to selectively retrieve virtual image datasets (402), uniquely associated with the reference object (20) through the identification code (CD1).
15. The system (1) according to claim 14, wherein the memory (30) is a remote database located in a remote server.
16. The system (1) according to claim 14 or 15, wherein the identification element of the reference object (20) is a near field communication (NFC) transmitter configured to transmit the identification code (CD1) wirelessly to the electronic device (10).
PCT/IB2021/054969 2020-06-08 2021-06-07 Method for displaying augmented reality multimedia contents WO2021250540A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102020000013612A IT202000013612A1 (en) 2020-06-08 2020-06-08 METHOD FOR VIEWING MULTIMEDIA CONTENT IN AUGMENTED REALITY
IT102020000013612 2020-06-08

Publications (1)

Publication Number Publication Date
WO2021250540A1 true WO2021250540A1 (en) 2021-12-16

Family

ID=72356262


Country Status (2)

Country Link
IT (1) IT202000013612A1 (en)
WO (1) WO2021250540A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190019011A1 (en) * 2017-07-16 2019-01-17 Tsunami VR, Inc. Systems and methods for identifying real objects in an area of interest for use in identifying virtual content a user is authorized to view using an augmented reality device
US20200066050A1 (en) * 2018-08-24 2020-02-27 Virnect Inc Augmented reality service software as a service based augmented reality operating system


Also Published As

Publication number Publication date
IT202000013612A1 (en) 2021-12-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21730294; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21730294; Country of ref document: EP; Kind code of ref document: A1)