CN108092950A - Location-based AR or MR social interaction method - Google Patents

Location-based AR or MR social interaction method

Info

Publication number
CN108092950A
Authority
CN
China
Prior art keywords
user
target user
image
virtual image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611106337.6A
Other languages
Chinese (zh)
Other versions
CN108092950B (en)
Inventor
Jin Dekui (金德奎)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Facebook Technology Co ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201611106337.6A priority Critical patent/CN108092950B/en
Publication of CN108092950A publication Critical patent/CN108092950A/en
Application granted granted Critical
Publication of CN108092950B publication Critical patent/CN108092950B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/52Network services specially adapted for the location of the user terminal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/131Protocols for games, networked simulations or virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a location-based AR or MR social interaction method. A user's virtual image is associated with the user's identity and registered on a system server; the virtual image is given a button attribute and associated with a social window. The virtual image of a target user is superimposed at the target user's position. When a user reaches the target user's location region, the user obtains the target user's virtual image from the system server and superimposes it on the display interface of the user's equipment. The user aims at or clicks the target user's virtual image on the display interface to activate the social window, enters social content, and sends it to the system server. The system server receives the social content sent by the user and forwards it to the target user's equipment.

Description

Location-based AR or MR social interaction method
Technical field
The present invention relates to a social interaction method, and more particularly to a location-based AR or MR social interaction method.
Background technology
Augmented reality (AR) uses computer technology to apply virtual information to the real world, so that the real environment and virtual objects are superimposed in the same picture or space in real time. A representative augmented reality device is Microsoft's HoloLens holographic glasses, which can project news feeds, play video, check the weather, assist 3D modelling, and simulate scenes such as landing on Mars. It combines the virtual and the real well and enables better interaction. Mixed reality (MR) includes both augmented reality and augmented virtuality, and refers to a new visual environment generated by merging the real world and the virtual world, in which physical and digital objects coexist and interact in real time. Mediated reality is also abbreviated MR. VR is a purely virtual digital picture; AR is a virtual digital picture plus naked-eye reality; MR is digitized reality plus a virtual digital picture. In China, one company, Yitong Technology, is currently devoted to research and development in this area and is developing MR glasses. Another example is the very popular game Pokemon Go, an augmented reality (AR) pet-battle RPG (role-playing game) mobile game jointly developed by Nintendo, The Pokémon Company and Google's Niantic Labs. Pokemon Go is a game of exploring, capturing, battling and trading the creatures that appear in the real world: players can find creatures in the real world through a smartphone, capture them and battle with them.
Existing social networking methods mainly include QQ, WeChat and Momo. These are all Internet-based social platforms: users need to register and then add friends by QQ number, WeChat ID or Momo ID. Even WeChat's "scan to add friend" only works by scanning the QR code of a new acquaintance met in person; a face-to-face relationship must be established before the two can add each other as friends and then continue to communicate through the platform. For strangers in real life, directly establishing a social connection is extremely difficult, even practically impossible: even if the other person is standing right in front of you, you still have to talk face to face and obtain his or her WeChat ID before you can add them on WeChat. This is all the more regrettable when an attractive stranger you would like to connect with is about to walk past. Therefore, quickly establishing a social connection between strangers within visual range or at close range is a problem that urgently needs to be solved.
Summary of the invention
The technical problem to be solved by the invention is to provide a location-based AR or MR social interaction method that overcomes the problems in the prior art. The technical solution is as follows:
A location-based AR or MR social interaction method, characterized in that:
A virtual image of a user is associated with the user's identity and registered on a system server;
The virtual image is given a button attribute and associated with a social window;
The virtual image of a target user is superimposed at the target user's position; when a user reaches the target user's location region, the user obtains the target user's virtual image from the system server and superimposes it on the display interface of the user's equipment;
The user aims at or clicks the target user's virtual image on the display interface to activate the social window, enters social content, and sends it to the system server;
The system server receives the social content sent by the user and forwards it to the target user's equipment;
The target user's equipment receives the social content sent by the user and displays it on the social window of the equipment's display interface;
The target user enters a reply through the social window and sends it to the system server;
The system server receives the reply sent by the target user and forwards it to the social window of the user equipment's display interface for display.
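For illustration only (this is not part of the claimed method, and all class, method and field names, as well as the default 10 m radius, are assumptions), the server-side relay described above can be sketched in Python: registering a user's virtual image and position, returning the virtual images of target users whose location regions a querying user has entered, and forwarding social content to the target user's equipment.

```python
import math
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    virtual_image: str                  # e.g. a reference to the registered avatar
    position: tuple                     # (latitude, longitude) reported by the device
    region_radius_m: float = 10.0       # location region S, ~10 m by default
    inbox: list = field(default_factory=list)

class SocialServer:
    """Rough model of the system server: registration, overlay queries, relay."""

    def __init__(self):
        self.users = {}

    def register(self, user_id, virtual_image, position):
        # Associate the virtual image with the user identity and file it.
        self.users[user_id] = UserRecord(user_id, virtual_image, position)

    def visible_targets(self, user_id):
        # Virtual images of target users whose location region the querying
        # user has entered, to be superimposed on the display interface.
        me = self.users[user_id]
        hits = []
        for other in self.users.values():
            if other.user_id == user_id:
                continue
            if self._distance_m(me.position, other.position) <= other.region_radius_m:
                hits.append((other.user_id, other.virtual_image, other.position))
        return hits

    def send_social_content(self, sender_id, target_id, content):
        # Relay content entered in the sender's social window to the target
        # user's equipment (modelled here as a simple inbox queue).
        self.users[target_id].inbox.append((sender_id, content))

    @staticmethod
    def _distance_m(p1, p2):
        # Equirectangular approximation, adequate over tens of metres.
        lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
        x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
        y = lat2 - lat1
        return math.hypot(x, y) * 6371000
```

In this sketch the reply path is symmetric: the target user's equipment calls send_social_content back toward the original sender, and the server forwards it to the sender's social window.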
The present invention provides a location-based AR or MR social interaction method that overcomes the problems in the prior art. A user's virtual image is associated with the user's identity and registered on a system server; the virtual image is given a button attribute and associated with a social window. The virtual image of a target user is superimposed at the target user's position. When a user reaches the target user's location region, the user obtains the target user's virtual image from the system server and superimposes it on the display interface of the user's equipment. The user aims at or clicks the target user's virtual image on the display interface to activate the social window, enters social content and sends it to the system server. The system server receives the social content sent by the user and forwards it to the target user's equipment. The target user's equipment receives the social content sent by the user and displays it on the social window of the equipment's display interface. The target user enters a reply through the social window and sends it to the system server. The system server receives the reply sent by the target user and forwards it to the social window of the user equipment's display interface for display. The user captures the body image of the target user in the real scene through a camera unit and performs image recognition on it; combined with the relative positions of the user and the target user in the system's position map, the target user's body image is matched with the target user's virtual image, and the virtual image is made to point to the corresponding body image or is superimposed directly on it, so that the user can identify the target user and communication between users becomes more convenient. With the present invention, users within visual range do not need to add each other as friends first: using user equipment such as a mobile phone, they can temporarily establish a social connection and converse with each other through AR, and add each other as friends afterwards if they get along well. The invention therefore has a simple architecture, is easy to use and provides a comfortable user experience, and represents a significant technical advance over the prior art.
Description of the drawings
Fig. 1 is the first flow chart of the present invention.
Fig. 2 is the second flow chart of the present invention.
Fig. 3 is the third flow chart of the present invention.
Fig. 4 is the first schematic diagram of the present invention.
Fig. 5 is the second schematic diagram of the present invention.
Fig. 6 is the third schematic diagram of the present invention.
Fig. 7 is the fourth schematic diagram of the present invention.
Fig. 8 is the fifth schematic diagram of the present invention.
Fig. 9 is the sixth schematic diagram of the present invention.
Fig. 10 is the seventh schematic diagram of the present invention.
Fig. 11 is the eighth schematic diagram of the present invention.
Specific embodiments
The present invention is further described below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1 and Figs. 4 to 6, a location-based AR or MR social interaction method comprises the following steps:
The virtual image 20 of a user 10 is associated with the user's identity and registered on a system server;
The virtual image 20 is given a button attribute and associated with a social window.
The virtual image 20 of the target user is superimposed at the position of the target user 10. When the user reaches the location region S of the target user 10, the user obtains the virtual image 20 of the target user 10 from the system server and superimposes it on the display interface 110 of the user's equipment.
The user aims at or clicks the target user's virtual image 20 on the display interface 110 to activate the social window, enters social content and sends it to the system server.
The system server receives the social content sent by the user and forwards it to the target user's equipment 100;
The target user's equipment 100 receives the social content sent by the user and displays it on the social window of the equipment's display interface 110;
The target user 10 enters a reply through the social window and sends it to the system server; the system server receives the reply sent by the target user and forwards it to the social window of the user equipment's display interface for display.
The first time the target user's equipment receives social content sent by the user, the content further includes the user's position and virtual image.
As shown in Fig. 2 and Figs. 7 to 10, the user captures the body images of target users U1 to U3 in the real scene through a camera unit. The system server, in the order in which the user reaches the location regions S1 to S3 of target users U1 to U3, superimposes the virtual images 20-U1 to 20-U3 of the respective target users on the display interface 110 of the user's equipment in sequence. Image recognition is performed on the captured body images U1 to U3 of the target users to identify their positions on the display interfaces of Figs. 7 to 10; these are matched against the positions of the user and target users U1 to U3 acquired by the system server in Fig. 8, and the virtual images 20-U1 to 20-U3 are made to point to the corresponding body images U1 to U3, or are directly superimposed on the corresponding body images as shown in Fig. 6, so that virtual images and body images are associated with each other.
As illustrated in Fig. 7 and Fig. 10, Fig. 7 is a schematic diagram of the image of the target user, captured by the camera unit of the user's equipment, as shown on the display interface 110 of the user's equipment, and Fig. 10 is the location map of the user and the target users acquired by the system server. User U first enters the location region S1 of target user U1; at this time the user is at position U-L1 in Fig. 8. The system server obtains the location information indicating that user U has entered the location region S1 of target user U1, and then sends the virtual image 20-U1 of target user U1 to be superimposed on the display interface 110 of user U's equipment. Image recognition is performed on the captured body image U1 of the target user to identify the position of the body image U1 on the display interface of Fig. 7; this is matched against the positions of the user and target user U1 acquired by the system server in Fig. 10, and the virtual image 20-U1 is made to point to the corresponding body image U1. As long as user U has not yet entered the location regions S2 and S3 of target users U2 and U3, the virtual images 20-U2 and 20-U3 of target users U2 and U3 do not appear.
As shown in Figs. 8 to 10, user U enters the location regions S2 and S3 of target users U2 and U3 according to the same principle as above. The system server, in the order in which the user reaches the location regions S2 and S3 of target users U2 and U3, superimposes the virtual images 20-U2 and 20-U3 of the respective target users on the display interface 110 of the user's equipment in sequence. Image recognition is performed on the captured body images U2 and U3 of the target users to identify their positions on the display interfaces of Fig. 8 and Fig. 9; these are matched against the positions of the user and target users U2 and U3 acquired by the system server in Fig. 10, and the virtual image 20-U2 is made to point to the corresponding body image U2 and the virtual image 20-U3 to the corresponding body image U3, so that they are associated with each other.
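As a purely hypothetical sketch of the position-based matching described above (the data layout and the greedy strategy are assumptions, not the patent's implementation), each detected body image can be paired with the server-reported target whose expected on-screen position is nearest, after which the corresponding virtual image is pointed at, or superimposed on, that body image:

```python
from dataclasses import dataclass

@dataclass
class DetectedBody:
    box: tuple            # (x, y, w, h) of a body image on the display interface

@dataclass
class ReportedTarget:
    user_id: str
    virtual_image: str
    screen_x: float       # expected horizontal screen position, projected from
                          # the server-side location map and the device heading

def match_bodies_to_targets(bodies, targets):
    """Greedy nearest-position pairing of detected body images with
    server-reported targets; returns (target, body) pairs."""
    pairs = []
    free_bodies = list(bodies)
    for target in sorted(targets, key=lambda t: t.screen_x):
        if not free_bodies:
            break
        # Choose the body image whose horizontal centre is closest to the
        # target's expected screen position.
        best = min(free_bodies,
                   key=lambda b: abs((b.box[0] + b.box[2] / 2) - target.screen_x))
        free_bodies.remove(best)
        pairs.append((target, best))
    return pairs
```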
Another embodiment of the present invention is shown in Fig. 3: the user captures the body image of the target user in the real scene through the camera unit, and the system server performs image recognition on the captured body image of the target user, matches it against the user virtual images registered on the system server, and makes the virtual image point to the corresponding body image or superimposes it directly on the corresponding body image.
In the present invention, the virtual image of a user is the user's head portrait registered on the system server, as shown in Figure 10.
The image recognition of the present invention further includes recognizing the gender of the target user's body image.
Therefore, in the embodiment shown in Fig. 3, using the user's head portrait registered on the system server as the virtual image for image recognition against the captured body image of the target user significantly improves matching accuracy, and adding other recognition of the body image can further improve it. In addition, the target user's body image can be looked up against user information stored or registered in the system, and the result used as a basis for matching the target user's body image with the target user's virtual image, improving matching accuracy. For example, if the target user has posted a self-portrait photo on the system's social platform, the system server can find this photo and match it against the target user's body image captured by the user.
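A loose illustration of this head-portrait matching is sketched below; it is an assumption-laden example in which embed_face merely stands in for whatever face-feature extractor a real system would use, and the similarity threshold is arbitrary:

```python
import numpy as np

def embed_face(image):
    """Stand-in for a real face-feature extractor: hashes the input into a
    deterministic pseudo-embedding so the sketch runs end to end."""
    rng = np.random.default_rng(abs(hash(image)) % (2 ** 32))
    return rng.standard_normal(128)

def best_registered_match(captured_image, registered_portraits, threshold=0.6):
    """Return the user_id whose registered head portrait is most similar to the
    captured body image, or None if nothing clears the (arbitrary) threshold."""
    query = embed_face(captured_image)
    best_id, best_score = None, -1.0
    for user_id, portrait in registered_portraits.items():
        ref = embed_face(portrait)
        score = float(np.dot(query, ref) /
                      (np.linalg.norm(query) * np.linalg.norm(ref)))
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None
```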
The image recognition of the present invention further includes judging the distance between the user and a target user, and the distance between target users, according to the size and position of the target user's body image.
The above image recognition improves the accuracy with which a target user's body image is matched to a virtual image, and also reduces interference with the matching when people who are not registered users of the system appear in the picture captured by the camera unit.
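One simple way to judge such a distance from the body image alone, assuming a pinhole camera model and an average body height, might look like the following sketch (illustrative only; the field of view and body height are assumed values, not part of the patent):

```python
import math

def estimate_distance_m(body_pixel_height, image_pixel_height,
                        vertical_fov_deg=60.0, assumed_body_height_m=1.7):
    """Rough pinhole-camera range estimate from the apparent height of a
    detected body image; accuracy depends entirely on the assumed values."""
    # Focal length in pixels, derived from the vertical field of view.
    focal_px = (image_pixel_height / 2) / math.tan(math.radians(vertical_fov_deg) / 2)
    return assumed_body_height_m * focal_px / body_pixel_height
```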
In the system, the location region S of a target user may be configured as a variable area whose radius is adjustable, to be set by the user and the target user. The radius of a target user's location region S can be chosen from a few metres up to hundreds of metres or even several kilometres. Enlarging a target user's region radius enlarges the search region for that target user, so more target users can be found and more target-user virtual images can be shown on the display interface. However, too many virtual images are hard to distinguish, in which case only the virtual image of the target user nearest to the user is brought to the front. For ease of practical operation, the initial value may be set to about 10 metres.
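For illustration (a hypothetical sketch, not the claimed method), limiting clutter when many target users fall inside the search radius could be handled by sorting the found targets by distance and bringing only the nearest one's virtual image to the front:

```python
def arrange_overlays(found_targets, max_shown=20):
    """found_targets: (user_id, virtual_image, distance_m) tuples returned by
    the server for the current search radius. Returns the subset to display,
    nearest first, so the closest target's virtual image is drawn in front."""
    nearest_first = sorted(found_targets, key=lambda t: t[2])
    return nearest_first[:max_shown]
```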
In the present invention, after a user's virtual image has been matched with and associated to the user's body image, the body image of the user is also given a button attribute.
The virtual-image button and the body-image button of a target user may be given different button attributes representing different button functions.
Alternatively, the virtual-image button and the body-image button of a target user may be given the same button attribute, so that clicking either the virtual image or the body image of the target user opens the social window or performs other operations.
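The "button attribute" can be pictured, for illustration only, as a tappable hit region bound to a handler; in the hypothetical sketch below, giving the virtual image and the body image the same button attribute corresponds to binding both regions to the same handler that opens the social window, while different attributes bind them to different handlers:

```python
def open_social_window(target_id):
    print(f"opening social window for {target_id}")      # placeholder UI action

def show_profile(target_id):
    print(f"showing profile of {target_id}")             # placeholder UI action

class TapRegion:
    """A rectangular hit region on the display interface bound to a handler."""
    def __init__(self, box, on_tap):
        self.box = box          # (x, y, w, h)
        self.on_tap = on_tap

    def hit(self, x, y):
        bx, by, bw, bh = self.box
        return bx <= x <= bx + bw and by <= y <= by + bh

def bind_buttons(target_id, virtual_box, body_box, same_attribute=True):
    """Give the target's virtual image and body image button behaviour.
    With same_attribute=True, tapping either region opens the social window;
    otherwise the body image triggers a different function."""
    virtual_btn = TapRegion(virtual_box, lambda: open_social_window(target_id))
    body_handler = open_social_window if same_attribute else show_profile
    body_btn = TapRegion(body_box, lambda: body_handler(target_id))
    return [virtual_btn, body_btn]
```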
For the acquisition of position, the present invention uses positioning technologies and systems of the prior art, including GPS and BeiDou positioning, together with geographic map systems. Usable geographic maps include Google Maps, Baidu Map, Tencent Map, Amap and BeiDou maps. The present invention can also obtain the user's position using hardware positioning technologies and systems such as WiFi, Bluetooth and infrared. The position acquisition of the present invention can additionally use the gyroscope, accelerometer and electronic compass of the user equipment as auxiliary positioning.
For the acquisition of position, the present invention can also scan a location marker at a position through the camera unit and send the location information to the system server through the user equipment. This position acquisition method can be used for indoor or outdoor positioning of the user's location using QR codes. The QR-code indoor or outdoor positioning can also use the gyroscope, accelerometer and electronic compass of the user equipment as auxiliary positioning.
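A minimal sketch of such marker-based reporting is shown below; the QR payload format and the server endpoint are assumptions made for illustration only:

```python
import json
import urllib.request

def report_scanned_location(user_id, qr_payload, server_url):
    """Send a location marker decoded from a scanned QR code to the server.
    qr_payload is assumed to be JSON such as {"site": "mall-3F", "lat": ..., "lng": ...}.
    """
    marker = json.loads(qr_payload)
    body = json.dumps({"user_id": user_id, "marker": marker}).encode("utf-8")
    req = urllib.request.Request(
        server_url + "/location/marker",              # hypothetical endpoint
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```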
The position acquisition of the present invention described above uses existing outdoor and indoor positioning technologies. The user equipment of the present invention includes, but is not limited to, mobile phones, tablet computers, and AR or MR head-mounted devices with a camera unit; these are prior-art devices and are not described in detail here.
The virtual image of the present invention includes two-dimensional (2D) images and three-dimensional (3D) images. The virtual image includes, but is not limited to, at least one of characters, pictures, figures, images, colours, bar codes and QR codes.
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention.

Claims (10)

1. A location-based AR or MR social interaction method, characterized in that:
A virtual image of a user is associated with the user's identity and registered on a system server;
The virtual image is given a button attribute and associated with a social window;
The virtual image of a target user is superimposed at the target user's position; when a user reaches the target user's location region, the user obtains the target user's virtual image from the system server and superimposes it on the display interface of the user's equipment;
The user aims at or clicks the target user's virtual image on the display interface to activate the social window, enters social content, and sends it to the system server;
The system server receives the social content sent by the user and forwards it to the target user's equipment;
The target user's equipment receives the social content sent by the user and displays it on the social window of the equipment's display interface;
The target user enters a reply through the social window and sends it to the system server;
The system server receives the reply sent by the target user and forwards it to the social window of the user equipment's display interface for display.
2. The method as claimed in claim 1, characterized in that:
The user captures body images of target users in the real scene through a camera unit; the system server, in the order in which the user reaches the location regions of the respective target users, superimposes the virtual images of the respective target users on the display interface of the user's equipment in sequence; image recognition is performed on the captured body images of the target users and matched against the positions of the user and the target users acquired by the system server, and each virtual image is made to point to the corresponding body image or is directly superimposed on the corresponding body image so that they are associated with each other.
3. The method as claimed in claim 1 or claim 2, characterized in that:
The user captures the body image of a target user in the real scene through a camera unit; the system server performs image recognition on the captured body image of the target user, matches it against the user virtual images registered on the system server, and makes the virtual image point to the corresponding body image or superimposes it directly on the corresponding body image so that they are associated with each other.
4. The method as claimed in claim 2, characterized in that:
After the user's virtual image has been matched with and associated to the user's body image, the body image of the user is given a button attribute.
5. The method as claimed in claim 3, characterized in that:
After the user's virtual image has been matched with and associated to the user's body image, the body image of the user is given a button attribute.
6. The method as claimed in claim 3, characterized in that:
The image recognition further includes recognizing the gender of the target user's body image.
7. The method as claimed in claim 3, characterized in that:
The image recognition further includes judging the distance between the user and a target user, and the distance between target users, according to the size and position of the target user's body image.
8. The method as claimed in claim 3, characterized in that:
The virtual image of the user is the user's head portrait registered by the user on the system server.
9. The method as claimed in claim 4 or claim 5, characterized in that:
The virtual-image button and the body-image button of the target user are given different button attributes representing different button functions, or the virtual-image button and the body-image button of the target user are given the same button attribute.
10. The method as claimed in claim 1, characterized in that:
The first time the target user's equipment receives social content sent by the user, the content further includes the user's position and virtual image.
CN201611106337.6A 2016-11-23 2016-11-23 AR or MR social method based on position Active CN108092950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611106337.6A CN108092950B (en) 2016-11-23 2016-11-23 AR or MR social method based on position

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611106337.6A CN108092950B (en) 2016-11-23 2016-11-23 AR or MR social method based on position

Publications (2)

Publication Number Publication Date
CN108092950A 2018-05-29
CN108092950B CN108092950B (en) 2023-05-23

Family

ID=62170285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611106337.6A Active CN108092950B (en) 2016-11-23 2016-11-23 AR or MR social method based on position

Country Status (1)

Country Link
CN (1) CN108092950B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130178257A1 (en) * 2012-01-06 2013-07-11 Augaroo, Inc. System and method for interacting with virtual objects in augmented realities
US20140160123A1 (en) * 2012-12-12 2014-06-12 Microsoft Corporation Generation of a three-dimensional representation of a user
CN105190477A (en) * 2013-03-21 2015-12-23 索尼公司 Head-mounted device for user interactions in an amplified reality environment
CN103297544A (en) * 2013-06-24 2013-09-11 杭州泰一指尚科技有限公司 Instant messaging application method based on augmented reality
CN103412953A (en) * 2013-08-30 2013-11-27 苏州跨界软件科技有限公司 Social contact method on the basis of augmented reality
US20150295873A1 (en) * 2014-04-14 2015-10-15 Novastone Media Ltd Threaded messaging
US20160133230A1 (en) * 2014-11-11 2016-05-12 Bent Image Lab, Llc Real-time shared augmented reality experience
CN105323252A (en) * 2015-11-16 2016-02-10 上海璟世数字科技有限公司 Method and system for realizing interaction based on augmented reality technology and terminal
CN105320282A (en) * 2015-12-02 2016-02-10 广州经信纬通信息科技有限公司 Image recognition solution based on augmented reality

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109768918A (en) * 2019-01-16 2019-05-17 北京众纳鑫海网络技术有限公司 For realizing the method and apparatus of instant messaging
CN112565165A (en) * 2019-09-26 2021-03-26 北京外号信息技术有限公司 Interaction method and system based on optical communication device
CN112565165B (en) * 2019-09-26 2022-03-29 北京外号信息技术有限公司 Interaction method and system based on optical communication device

Also Published As

Publication number Publication date
CN108092950B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN107820593B (en) Virtual reality interaction method, device and system
EP3131263B1 (en) Method and system for mobile terminal to simulate real scene to achieve user interaction
TWI615776B (en) Method and system for creating virtual message onto a moving object and searching the same
CN106920079A (en) Virtual objects distribution method and device based on augmented reality
CN105279795B (en) Augmented reality system based on 3D marker
WO2019096027A1 (en) Communication processing method, terminal, and storage medium
CN111638796A (en) Virtual object display method and device, computer equipment and storage medium
CN108573201A (en) A kind of user identity identification matching process based on face recognition technology
WO2019059992A1 (en) Rendering virtual objects based on location data and image data
CN106127552B (en) Virtual scene display method, device and system
CN103248810A (en) Image processing device, image processing method, and program
JP6720385B1 (en) Program, information processing method, and information processing terminal
KR101738443B1 (en) Method, apparatus, and system for screening augmented reality content
CN110033293A (en) Obtain the method, apparatus and system of user information
WO2019109828A1 (en) Ar service processing method, device, server, mobile terminal, and storage medium
CN112419388A (en) Depth detection method and device, electronic equipment and computer readable storage medium
CN108242017A (en) A kind of location-based comment interaction systems and method
CN110160529A (en) A kind of guide system of AR augmented reality
CN104501797B (en) A kind of air navigation aid based on augmented reality IP maps
CN108092950A (en) A kind of location-based AR or MR social contact methods
CN104501798A (en) Network object positioning and tracking method based on augmented reality IP map
TW201823929A (en) Method and system for remote management of virtual message for a moving object
KR102022912B1 (en) System for sharing information using mixed reality
CN113470190A (en) Scene display method and device, equipment, vehicle and computer readable storage medium
CN112788443B (en) Interaction method and system based on optical communication device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TA01 Transfer of patent application right

Effective date of registration: 20230510

Address after: 107, 1st floor, Tsinghua Information Port scientific research building, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Facebook Technology Co.,Ltd.

Address before: 518000, No. 19 Haitian 1st Road, Nanshan District, Shenzhen, Guangdong Province

Applicant before: Jin Dekui

TA01 Transfer of patent application right