CN108092950B - AR or MR social method based on position - Google Patents

AR or MR social method based on position

Info

Publication number
CN108092950B
CN108092950B CN201611106337.6A
Authority
CN
China
Prior art keywords
user
target user
social
virtual image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611106337.6A
Other languages
Chinese (zh)
Other versions
CN108092950A (en)
Inventor
Jin Dekui (金德奎)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Facebook Technology Co ltd
Original Assignee
Shenzhen Facebook Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Facebook Technology Co ltd filed Critical Shenzhen Facebook Technology Co ltd
Priority to CN201611106337.6A priority Critical patent/CN108092950B/en
Publication of CN108092950A publication Critical patent/CN108092950A/en
Application granted granted Critical
Publication of CN108092950B publication Critical patent/CN108092950B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/52: Network services specially adapted for the location of the user terminal
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/131: Protocols for games, networked simulations or virtual reality

Abstract

The invention provides a location-based AR or MR social method. A user's virtual image is associated with the user's identity and recorded in a system server, and the virtual image is given button attributes and associated with a social window. The target user's virtual image is superimposed at the target user's position; when a user reaches the target user's location area, the user acquires the target user's virtual image through the system server, and it is superimposed and displayed on the display interface of the user's device. The user aims at or clicks the target user's virtual image in the display interface to activate the social window, inputs social content, and sends it to the system server. The system server receives the social content sent by the user and forwards it to the target user's device.

Description

AR or MR social method based on position
Technical Field
The present invention relates to a social method, and more particularly to a location-based AR or MR social method.
Background
Augmented reality (AR) applies virtual information to the real world through computer technology: the real environment and virtual objects are superimposed in the same picture or space in real time and coexist. Current AR devices and technology are represented by Microsoft's HoloLens holographic glasses, which can project news feeds, play video, show the weather, assist 3D modeling, simulate landing on Mars, and run simulated games, combining the virtual and the real very successfully with good interactivity. Mixed reality (MR), which encompasses both augmented reality and augmented virtuality, refers to a new visual environment created by merging the real and virtual worlds, in which physical and digital objects coexist and interact in real time; it is also called mediated reality. Roughly speaking, VR is a purely virtual digital picture, AR is a virtual digital picture plus naked-eye reality, and MR is digitized reality plus a virtual digital picture. At present, a company called Yimijia Technology is focused on this field and is developing MR glasses. The hugely popular game Pokemon Go is an augmented-reality pet-raising battle RPG mobile game developed jointly by Nintendo, The Pokemon Company, and Google's Niantic Labs. Pokemon Go is a game of exploring, capturing, battling, and trading creatures that appear in the real world; players find them in the real world through a smartphone to capture them and battle.
Existing social methods, mainly QQ, WeChat, and Momo, are Internet-based social platforms: a user must register first and then add friends via QQ numbers, WeChat IDs, or Momo numbers. Even adding a friend in the real world by scanning their WeChat QR code requires first establishing a face-to-face relationship and only then communicating further through the platform. A true stranger cannot directly establish a social relationship; even someone visible right in front of the user can only be added after a face-to-face conversation in which the other party's WeChat ID is obtained. This leaves no way to connect with an appealing stranger who merely passes by. Rapidly establishing social relationships between strangers within visible or close range is therefore an urgent problem to be solved.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a location-based AR or MR social method that overcomes the problems in the prior art. The technical scheme is as follows:
a location-based AR or MR social method, characterized by:
the virtual image of the user is associated with the identity of the user and is recorded in a system server;
assigning the virtual image to a button attribute and associating with a social window;
superimposing the virtual image of the target user at the position of the target user; when the user reaches the location area of the target user, the user acquires the virtual image of the target user through a system server, and it is superimposed and displayed on the display interface of the user equipment;
the user aims at or clicks the target user's virtual image in the display interface to activate a social window, inputs social content, and sends it to a system server;
the system server receives social content sent by a user and forwards the social content to target user equipment;
the target user equipment receives social content sent by a user and displays the social content on a social window of the equipment display interface;
the target user inputs the replied social content through the social window and sends the replied social content to the system server;
and the system server receives the reply social content sent by the target user and forwards the reply social content to a social window of a display interface of the user equipment for display.
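The relay flow the steps above describe can be sketched in code. This is a minimal illustrative sketch, not the patent's implementation: the server keeps a registry of users (identity, virtual image, an inbox standing in for the device's social window) and forwards social content between a user and a target user; all class and variable names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str
    avatar: str                                 # virtual image associated with the identity
    inbox: list = field(default_factory=list)   # stands in for the device's social window

class SocialServer:
    def __init__(self):
        self.users = {}                         # user_id -> User record

    def register(self, user_id, avatar):
        # Record the virtual image against the user identity in the server.
        self.users[user_id] = User(user_id, avatar)

    def send(self, sender_id, target_id, content):
        # Server receives content from the sender and forwards it to the
        # target device, where it appears in the social window.
        self.users[target_id].inbox.append((sender_id, content))

server = SocialServer()
server.register("alice", "avatar_A")
server.register("bob", "avatar_B")
server.send("alice", "bob", "Hello!")           # user -> server -> target
server.send("bob", "alice", "Hi there!")        # the reply travels the same path
print(server.users["bob"].inbox)                # [('alice', 'Hello!')]
```

Note that the reply step is symmetric: the server treats the target user's reply exactly like the original message, only with sender and target swapped.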
The invention provides a location-based AR or MR social method which overcomes the problems in the prior art. The user's virtual image is associated with the user's identity and recorded in the system server, and the virtual image is given button attributes and associated with a social window. The target user's virtual image is superimposed at the target user's position; when a user reaches the target user's location area, the user acquires the target user's virtual image through the system server, and it is superimposed and displayed on the display interface of the user's device. The user aims at or clicks the target user's virtual image in the display interface to activate the social window, inputs social content, and sends it to the system server. The system server receives the social content sent by the user and forwards it to the target user's device, which receives it and displays it in the social window of its display interface. The target user enters reply content through the social window and sends it to the system server, which receives the reply and forwards it to the social window of the user's display interface for display.
The user captures the body image of the target user in the real scene through the camera unit; image recognition is performed on the body image and, combined with the relative positions of the user and the target user in the system position map, the target user's body image is matched with the target user's virtual image. The virtual image then points to the corresponding body image, or is superimposed directly on it, making it easy for the user to recognize the target user and for users to communicate. With the invention, user devices such as mobile phones can establish a social relationship through AR for a temporary conversation without adding friends, and the two parties can add each other as friends afterwards if they get along well. Social contact within visual range via AR is simple in structure, convenient to use, and brings users a comfortable experience; compared with the prior art, it therefore represents clear technical progress.
Drawings
Fig. 1 is a flow chart one of the present invention.
Fig. 2 is a flow chart two of the present invention.
Fig. 3 is a flow chart three of the present invention.
Fig. 4 is schematic diagram one of the present invention.
Fig. 5 is schematic diagram two of the present invention.
Fig. 6 is schematic diagram three of the present invention.
Fig. 7 is schematic diagram four of the present invention.
Fig. 8 is schematic diagram five of the present invention.
Fig. 9 is schematic diagram six of the present invention.
Fig. 10 is schematic diagram seven of the present invention.
Fig. 11 is schematic diagram eight of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and embodiments.
As shown in fig. 1, 4 to 6, a location-based AR or MR social method includes the steps of:
a virtual image 20 of the user 10, associated with the user identity, and recorded in the system server;
the virtual image 20 is assigned a button attribute and is associated with a social window.
The virtual image 20 of the target user 10 is superimposed at the location of the target user 10; when the user reaches the location area S of the target user 10, the user acquires the virtual image 20 of the target user 10 through the system server, and it is superimposed and displayed on the display interface 110 of the user device.
The user aims at or clicks the target user's virtual image 20 in the display interface 110 to activate the social window, inputs social content, and sends it to the system server.
The system server receives the social content sent by the user and forwards the social content to the target user equipment 100;
the target user device 100 receives social content sent by a user and displays the social content on a social window of the device display interface 110;
the target user 10 inputs the replied social content through the social window and sends the replied social content to the system server; and the system server receives the reply social content sent by the target user and forwards the reply social content to a social window of a display interface of the user equipment for display.
The social content that the target user equipment receives from the user for the first time further includes the user's position and virtual image.
As shown in figs. 2 and 7 to 10, the user captures body images U1 to U3 of target users in the real scene through the camera unit. The system server superimposes the virtual images 20-U1 to 20-U3 of each target user on the user device display interface 110 in the order in which the user reaches the target users' location areas S1 to S3. Image recognition is performed on the captured body images U1 to U3, their positions in the display interfaces of figs. 7 to 10 are recognized and matched against the positions of the user and the target users U1 to U3 acquired by the system server in fig. 8, and the virtual images 20-U1 to 20-U3 are directed to the corresponding body images U1 to U3, or superimposed directly on them as shown in fig. 6, so that the two are associated.
As shown in figs. 7 and 10, fig. 7 shows the image of a target user captured by the camera unit on the user device side, displayed in the user device display interface 110, and fig. 10 is the position map of the user and the target users acquired by the system server. The user U first enters the location area S1 of the target user U1; at this point the system server determines (fig. 8) that the user is within the location area S1 of target user U1, and sends the virtual image 20-U1 of target user U1 to the device interface 110 of user U for superimposed display. The captured body image U1 of the target user is recognized, its position in the display interface of fig. 7 is identified and matched against the positions of the user and target user U1 acquired by the system server in fig. 10, and the virtual image 20-U1 is directed to the corresponding body image U1. While user U has not yet entered the locations S2 and S3 of target users U2 and U3, their virtual images 20-U2 and 20-U3 are not displayed.
As shown in figs. 8 to 10, the same principle applies when user U enters the location areas S2 and S3 of target users U2 and U3: the system server superimposes the virtual images 20-U2 and 20-U3 of each target user on the user device display interface 110 in the order in which the user reaches the location areas S2 and S3, performs image recognition on the captured body images U2 and U3, recognizes their positions in the display interfaces of figs. 8 and 9, matches them against the positions of the user and target users U2 and U3 acquired by the system server in fig. 10, directs the virtual image 20-U2 to the corresponding body image U2, and directs the virtual image 20-U3 to the corresponding body image U3.
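The association step above, pairing body images detected in the camera frame with the target users known to the server, can be sketched as follows. This is a deliberately simplified illustration (all names and numbers are hypothetical): body images and targets are paired by horizontal order, on the assumption that targets whose server-side bearing relative to the user is further left also appear further left on screen.

```python
def associate(detections, targets):
    """Pair detected body images with server-side target users by horizontal order.

    detections: list of (screen_x, body_image_id) from image recognition.
    targets:    list of (bearing_deg, user_id) from the server's position map,
                with bearing measured relative to the user's viewing direction.
    Returns a dict mapping user_id -> body_image_id.
    """
    by_x = sorted(detections)          # left to right on screen
    by_bearing = sorted(targets)       # left to right in the world
    return {user: body for (_, body), (_, user) in zip(by_x, by_bearing)}

# Three body images detected left-to-right, three targets ordered by bearing.
pairs = associate([(120, "U1"), (340, "U2"), (560, "U3")],
                  [(-30, "u1"), (0, "u2"), (25, "u3")])
print(pairs)  # {'u1': 'U1', 'u2': 'U2', 'u3': 'U3'}
```

Each virtual image 20-Ux can then be drawn pointing at, or superimposed on, the body image its user was paired with.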
In another embodiment of the invention, as shown in fig. 3, the user captures the body image of a target user in the real scene through the camera unit; the system server performs image recognition on the captured body image, matches it against the users' virtual images recorded in the system server, and directs the virtual image to the corresponding body image or superimposes it directly on the body image.
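The recognition-matching embodiment above can be sketched as a nearest-neighbour search in a feature space. The sketch is illustrative only: a real system would use a learned face or person embedding, whereas the feature vectors here are toy stand-ins, and all names are hypothetical.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def match(body_features, registered):
    """registered: user_id -> feature vector of that user's recorded avatar.
    Returns the user_id whose avatar features are most similar to the
    features extracted from the captured body image."""
    return max(registered, key=lambda uid: cosine(body_features, registered[uid]))

registered = {"u1": [0.9, 0.1, 0.3], "u2": [0.1, 0.8, 0.5]}
print(match([0.85, 0.15, 0.2], registered))  # u1
```

Once the best match is found, the matched user's virtual image is directed to (or superimposed on) the captured body image.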
The user's virtual image is the head portrait the user has recorded on the system server, as shown in fig. 10.
Image recognition in the invention also includes recognizing the gender of the target user's body image.
Thus, in the embodiment shown in fig. 3, using the user head portrait recorded in the system server as the virtual image significantly improves the accuracy of image-recognition matching against the captured body image of the target user, and incorporating gender recognition of the body image further improves matching accuracy. In addition, the target user's body image can be searched against and analyzed together with user information stored or recorded in the system, and used along with the target user's virtual image as a basis for identity recognition and matching, further improving accuracy. For example, the target user may have posted a selfie on the system's social platform, and the system server may find that this photo matches the body image of the target user captured by the user.
The image recognition method further includes judging the distance between the user and the target user, and between target users, from the size and position of the target user's body image.
Adopting this image recognition method improves the accuracy of matching the target user's body image with the virtual image, and reduces the interference with recognition matching caused by non-registered users entering the picture captured by the camera unit.
In the system, the location area S of the target user can be set as a variable area whose radius is adjustable, with the setting chosen by the user and the target user. The user can select a radius for the target user's location area S from a few meters up to hundreds or even thousands of meters; enlarging the radius enlarges the search area, so more target users can be found and more target-user virtual images displayed on the display interface. Too many virtual images, however, become hard to distinguish, so only the virtual images of the target users closest to the user are ranked in front. For convenient practical operation, the initial value can be set to about 10 meters.
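The variable-radius location area can be sketched as a distance query: find the registered users within the chosen radius of the user's position and rank the nearest first, so their virtual images are drawn in front. The coordinates and identifiers below are illustrative, and the default radius follows the roughly 10-meter initial value suggested in the text.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

def targets_in_area(me, others, radius_m=10.0):
    """others: user_id -> (lat, lon). Returns ids within radius, nearest first."""
    dists = {uid: haversine_m(me, pos) for uid, pos in others.items()}
    return sorted((uid for uid, d in dists.items() if d <= radius_m),
                  key=dists.get)

others = {"u1": (22.5431, 114.0579),   # a few metres away
          "u2": (22.5432, 114.0579),   # slightly further
          "u3": (22.60, 114.06)}       # kilometres away, outside any small radius
print(targets_in_area((22.54305, 114.0579), others, radius_m=20))  # ['u1', 'u2']
```

Enlarging `radius_m` corresponds to enlarging the search area S; the nearest-first ordering implements the rule that the closest target users' virtual images are ranked in front.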
After the user virtual image and the user body image are matched and correlated, the user body image is endowed with button attributes.
The virtual image buttons and the body image buttons of the target user are assigned different button attributes and represent different button functions.
The virtual image button and the body image button of the target user are assigned the same button attribute. Thus, whether clicking on a virtual image or a physical image of the target user, a social window may be opened or other actions may be performed.
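The shared-attribute variant above can be sketched as binding both clickable regions to one handler, so that clicking either the virtual image or the body image opens the same social window. This is a hypothetical UI binding; all names are illustrative.

```python
class SocialWindow:
    def __init__(self, target_id):
        self.target_id = target_id
        self.open = False

def make_button(region, handler):
    # A button is just a clickable region plus the handler its attribute binds.
    return {"region": region, "on_click": handler}

window = SocialWindow("u1")

def open_window():
    window.open = True

# Both buttons are given the same button attribute (the same handler).
virtual_btn = make_button("virtual_image_u1", open_window)
body_btn = make_button("body_image_u1", open_window)

body_btn["on_click"]()   # clicking the body image...
print(window.open)       # ...opens the same social window: True
```

Giving the two buttons different attributes instead would simply mean binding them to different handler functions.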
For position acquisition, the invention adopts existing positioning technologies and systems, including GPS and BeiDou positioning, together with a geographic map system. The geographic map can be Google Maps, Baidu Maps, Tencent Maps, Amap (Gaode), a BeiDou map, and so on. The invention can also obtain the user's position with WiFi, Bluetooth, infrared, and other hardware positioning technologies and systems. Position acquisition can additionally use the user device's gyroscope, accelerometer, and electronic compass as auxiliary positioning.
For position acquisition, the invention can also scan a location marker at the position with the camera unit and have the user device send the position information to the system server. This method can use indoor or outdoor QR-code positioning to locate the user, and can likewise add the user device's gyroscope, accelerometer, and electronic compass as auxiliary positioning.
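The scanned-marker approach can be sketched as follows. The payload format is an assumption made purely for illustration (the patent does not specify one): here the marker is taken to encode "lat,lon", which the device decodes and reports to the server.

```python
def parse_location_marker(payload):
    """Decode a scanned marker payload assumed (hypothetically) to be 'lat,lon'."""
    lat, lon = (float(v) for v in payload.split(","))
    return lat, lon

def report_position(server_log, user_id, payload):
    # The user device decodes the marker and sends the position to the server;
    # server_log stands in for the server's per-user position record.
    server_log[user_id] = parse_location_marker(payload)

server_log = {}
report_position(server_log, "alice", "22.5431,114.0579")
print(server_log)  # {'alice': (22.5431, 114.0579)}
```

Sensor readings from the gyroscope, accelerometer, and compass would refine this coarse marker position rather than replace it.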
The above position acquisition uses existing outdoor and indoor positioning technologies, and the user equipment of the invention includes, but is not limited to, a mobile phone, a tablet computer, or an AR or MR head-mounted display with a camera unit; these are prior-art devices and are not described in detail here.
The virtual image described in the invention includes two-dimensional (2D) and three-dimensional (3D) images, and comprises at least one of characters, pictures, graphics, images, colors, bar codes, and QR codes.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that it will be apparent to those skilled in the art that variations and modifications can be made without departing from the spirit of the invention.

Claims (6)

1. A location-based AR or MR social method, characterized by:
the virtual image of the user is associated with the identity of the user and is recorded in a system server;
assigning the virtual image to a button attribute and associating with a social window;
when the user reaches the position area of the target user, the user acquires the virtual image of the target user through the system server and displays the virtual image in a superimposed manner on the display interface of the user equipment, specifically: the user captures the body image of the target user in the real scene through the camera shooting unit, the system server carries out image recognition on the body image of the target user captured by the user, matches with the virtual image of the user recorded by the system server, and directs the virtual image to the corresponding body image or directly superimposes the virtual image on the corresponding body image; after the target user virtual image and the body image of the target user are matched and correlated, the body image of the target user is endowed with button attributes;
the user aims or clicks a target user virtual image in the display interface to activate a social window, inputs social content and sends the social content to a system server;
the system server receives social content sent by a user and forwards the social content to target user equipment;
the target user equipment receives social content sent by a user and displays the social content on a social window of the equipment display interface;
the target user inputs the replied social content through the social window and sends the replied social content to the system server;
and the system server receives the reply social content sent by the target user and forwards the reply social content to a social window of a display interface of the user equipment for display.
2. The method of claim 1, wherein:
the image recognition further comprises judging the distance between the user and the target user, and between target users, according to the size and position of the body image of the target user.
3. The method of claim 1, wherein:
the image recognition further includes image recognition of the gender of the target user body image.
4. The method of claim 1, wherein:
and the virtual image of the target user is a user head portrait recorded by the user at the system server.
5. The method of claim 1, wherein:
the virtual image button and the body image button of the target user are either assigned different button attributes representing different button functions, or assigned the same button attributes.
6. The method of claim 1, wherein:
the target user equipment receives social content sent by the user for the first time and further comprises the position and the virtual image of the user.
CN201611106337.6A 2016-11-23 2016-11-23 AR or MR social method based on position Active CN108092950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611106337.6A CN108092950B (en) 2016-11-23 2016-11-23 AR or MR social method based on position

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611106337.6A CN108092950B (en) 2016-11-23 2016-11-23 AR or MR social method based on position

Publications (2)

Publication Number Publication Date
CN108092950A CN108092950A (en) 2018-05-29
CN108092950B true CN108092950B (en) 2023-05-23

Family

ID=62170285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611106337.6A Active CN108092950B (en) 2016-11-23 2016-11-23 AR or MR social method based on position

Country Status (1)

Country Link
CN (1) CN108092950B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109768918B (en) * 2019-01-16 2022-06-24 北京众纳鑫海网络技术有限公司 Method and apparatus for implementing instant messaging
CN112565165B (en) * 2019-09-26 2022-03-29 北京外号信息技术有限公司 Interaction method and system based on optical communication device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105320282A (en) * 2015-12-02 2016-02-10 广州经信纬通信息科技有限公司 Image recognition solution based on augmented reality

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130178257A1 (en) * 2012-01-06 2013-07-11 Augaroo, Inc. System and method for interacting with virtual objects in augmented realities
US9552668B2 (en) * 2012-12-12 2017-01-24 Microsoft Technology Licensing, Llc Generation of a three-dimensional representation of a user
JP5900393B2 (en) * 2013-03-21 2016-04-06 ソニー株式会社 Information processing apparatus, operation control method, and program
CN103297544B (en) * 2013-06-24 2015-06-17 杭州泰一指尚科技有限公司 Instant messaging application method based on augmented reality
CN103412953A (en) * 2013-08-30 2013-11-27 苏州跨界软件科技有限公司 Social contact method on the basis of augmented reality
GB201406695D0 (en) * 2014-04-14 2014-05-28 Shopchat Ltd Threaded messaging
US20160133230A1 (en) * 2014-11-11 2016-05-12 Bent Image Lab, Llc Real-time shared augmented reality experience
CN105323252A (en) * 2015-11-16 2016-02-10 上海璟世数字科技有限公司 Method and system for realizing interaction based on augmented reality technology and terminal

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105320282A (en) * 2015-12-02 2016-02-10 广州经信纬通信息科技有限公司 Image recognition solution based on augmented reality

Also Published As

Publication number Publication date
CN108092950A (en) 2018-05-29

Similar Documents

Publication Publication Date Title
US11127210B2 (en) Touch and social cues as inputs into a computer
US10163267B2 (en) Sharing links in an augmented reality environment
CN108616563B (en) Virtual information establishing method, searching method and application system of mobile object
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
WO2019096027A1 (en) Communication processing method, terminal, and storage medium
CN105450736B (en) Method and device for connecting with virtual reality
US20190333478A1 (en) Adaptive fiducials for image match recognition and tracking
US20130174213A1 (en) Implicit sharing and privacy control through physical behaviors using sensor-rich devices
CN111638796A (en) Virtual object display method and device, computer equipment and storage medium
US9392248B2 (en) Dynamic POV composite 3D video system
CN112074797A (en) System and method for anchoring virtual objects to physical locations
JP6720385B1 (en) Program, information processing method, and information processing terminal
KR20140108436A (en) System and method for exercise game of social network type using augmented reality
KR20150075532A (en) Apparatus and Method of Providing AR
KR20180100074A (en) Method and system for sorting a search result with space objects, and a computer-readable storage device
JP2011203984A (en) Navigation device, navigation image generation method, and program
CN109522503B (en) Tourist attraction virtual message board system based on AR and LBS technology
CN108092950B (en) AR or MR social method based on position
TW201823929A (en) Method and system for remote management of virtual message for a moving object
CN113470190A (en) Scene display method and device, equipment, vehicle and computer readable storage medium
CN112947756A (en) Content navigation method, device, system, computer equipment and storage medium
CN111651049B (en) Interaction method, device, computer equipment and storage medium
CN114647303A (en) Interaction method, device and computer program product
CN112788443B (en) Interaction method and system based on optical communication device
CN111639975A (en) Information pushing method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TA01 Transfer of patent application right

Effective date of registration: 20230510

Address after: 107, 1st floor, Tsinghua Information Port scientific research building, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Facebook Technology Co.,Ltd.

Address before: 518000, No. 19 Haitian 1st Road, Nanshan District, Shenzhen, Guangdong Province

Applicant before: Jin Dekui

TA01 Transfer of patent application right