CN114119171A - MR/AR/VR shopping and retrieval scene control method, mobile terminal and readable storage medium - Google Patents

MR/AR/VR shopping and retrieval scene control method, mobile terminal and readable storage medium Download PDF

Info

Publication number
CN114119171A
CN114119171A (application CN202111460735.9A)
Authority
CN
China
Prior art keywords
information
image
data
intelligent glasses
article
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111460735.9A
Other languages
Chinese (zh)
Inventor
王雪燕
黄正宗
王亮
陈霖
张玉江
蔡雍稚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Kedun Technology Co ltd
Original Assignee
Zhejiang Kedun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Kedun Technology Co ltd filed Critical Zhejiang Kedun Technology Co ltd
Priority to CN202111460735.9A priority Critical patent/CN114119171A/en
Publication of CN114119171A publication Critical patent/CN114119171A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

A control method for MR/AR/VR shopping and retrieval scenes, a mobile terminal, and a readable storage medium. An image within the field of view is acquired through the sensing module of intelligent glasses and recognized by an algorithm to identify the corresponding article type; information associated with that type is retrieved from a network database and a built-in database and imaged through the imaging device of the intelligent glasses. By acquiring visual images within the field of view in real time and performing data association and binding, the method avoids the cumbersome and inaccurate retrieval of searching with a mobile phone.

Description

MR/AR/VR shopping and retrieval scene control method, mobile terminal and readable storage medium
Technical Field
The application relates to the technical field of augmented reality, and in particular to an MR/AR/VR shopping and retrieval scene control method, a mobile terminal, and a readable storage medium.
Background
At present, metaverse-based VR glasses are riding a wave of enthusiasm, and more and more companies and enterprises have announced that they are joining the metaverse and developing related products. In the metaverse, VR glasses mainly serve as immersive, fully virtual products; but precisely because they create a fully virtual world platform, users easily become absorbed in the virtual world and escape the real one, which tends to reduce everyone's productivity and output.
Meanwhile, in real life the cost of physically realizing a concept or design is often high and the construction period long; for lack of interactivity and interest in the real world, more and more people choose to roam the internet instead, and because of the high travel and search costs of the real world, more and more people choose to shop and order takeout online. All of these problems arise because the real world and the virtual world are not interconnected.
Therefore, a system and method that fuse the virtual world and the real world on one platform can not only keep users from sinking into the virtual world, but also reduce the cost of realization in the real world and improve its interest, interactivity, and relevance. Such a technology, system, and application can truly let people experience real life while enjoying the convenience of virtual life.
Disclosure of Invention
An MR/AR/VR shopping scene control method comprises the following steps:
S1, switching to or selecting the augmented reality interface of the shopping scene through an interface on the intelligent glasses;
S2, acquiring an image within the field of view through a sensing module of the intelligent glasses, recognizing the image with an algorithm, and identifying the corresponding article type;
S3, retrieving information associated with the article type from a network database and a built-in database, and imaging the associated information through an imaging device of the intelligent glasses.
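Steps S1 to S3 amount to a recognize-retrieve-render loop. The following is a minimal sketch in Python; the toy databases, the stand-in recognizer, and the render callback are all illustrative assumptions, since the patent does not prescribe a concrete recognition algorithm or database API.

```python
# Minimal sketch of the S1-S3 shopping-scene pipeline.
# NETWORK_DB, BUILTIN_DB, recognize_article_type, and render are
# illustrative assumptions, not interfaces defined by the patent.

NETWORK_DB = {"mug": ["encyclopedia: ceramic mug", "shopping link: platform A"]}
BUILTIN_DB = {"mug": ["user annotation: holds 350 ml"]}

def recognize_article_type(frame):
    """S2: stand-in for the image-recognition algorithm."""
    return "mug"  # a real system would run a detector/classifier on `frame`

def retrieve_associated_info(article_type):
    """S3: query the network database, then the built-in database."""
    return NETWORK_DB.get(article_type, []) + BUILTIN_DB.get(article_type, [])

def shopping_scene_step(frame, render):
    """One pass of the loop: recognize, retrieve, image."""
    article_type = recognize_article_type(frame)
    render(article_type, retrieve_associated_info(article_type))

shopping_scene_step(frame=None, render=print)
# mug ['encyclopedia: ceramic mug', 'shopping link: platform A',
#      'user annotation: holds 350 ml']
```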
The associated information modeling steps are as follows:
S4, establishing a three-dimensional model of the article through a PC port, a mobile port, or an intelligent glasses port, wherein the three-dimensional model includes, but is not limited to, the three-dimensional structure and physical characteristics of the article;
S5, setting and enabling characteristic image information for the three-dimensional model through a port, and at the same time setting specific gesture information for operating the three-dimensional model.
The three-dimensional model control steps are as follows:
S6, acquiring an image within the field of view through the sensing module of the intelligent glasses, and recognizing the image with an algorithm to identify the corresponding article;
S7, presenting the three-dimensional model of the corresponding article through the imaging device of the intelligent glasses, acquiring an image within the field of view through the sensing module, and checking the image for characteristic image information or specific gesture information. If characteristic image information is present, the size and state of the three-dimensional model are adjusted according to the set physical characteristics and the characteristic image information, and the adjusted model is superimposed on the characteristic image through the imaging device, matching and fitting the key adaptation points so that the three-dimensional model can be used; if specific gesture information is present, the size, shape, color, and display position of the three-dimensional model are adjusted according to the preset specific gesture.
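A sketch of this control branch is given below; the `Model3D` fields, the gesture vocabulary, and the two detector callbacks are assumptions for illustration, and the fitting of key adaptation points is reduced to a simple anchor/extent placeholder.

```python
# Sketch of the S6-S7 control branch for the three-dimensional model.

class Model3D:
    def __init__(self):
        self.size, self.color, self.position = 1.0, "default", (0.0, 0.0, 0.0)

    def fit_to_feature(self, feature, physical_traits):
        # Adjust size/state from the set physical characteristics, then
        # align to the characteristic image's key adaptation points.
        self.size = physical_traits.get("scale", 1.0) * feature["extent"]
        self.position = feature["anchor"]

GESTURE_ACTIONS = {
    "pinch":  lambda m: setattr(m, "size", m.size * 0.9),  # shrink
    "spread": lambda m: setattr(m, "size", m.size * 1.1),  # enlarge
}

def control_step(model, frame, detect_feature_image, detect_gesture, locked=False):
    if locked:  # optional lock against false-touch adjustment
        return
    feature = detect_feature_image(frame)
    if feature is not None:
        model.fit_to_feature(feature, physical_traits={"scale": 1.0})
    else:
        gesture = detect_gesture(frame)
        if gesture in GESTURE_ACTIONS:
            GESTURE_ACTIONS[gesture](model)

# Usage (illustrative): a "spread" gesture enlarges the model by 10%.
m = Model3D()
control_step(m, frame=None,
             detect_feature_image=lambda f: None,
             detect_gesture=lambda f: "spread")
print(round(m.size, 2))  # 1.1
```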
Before the associated information is imaged by the imaging device of the intelligent glasses in S3, the associated information is screened, which comprises the following steps:
S8, capturing eyeball information through a rear camera, acquiring the optical focus of the eyeball at the current moment, projecting the focus onto the image acquired by the sensing module, and determining whether the focus lies within the range of a recognized object;
S9, if the focus lies within the range of a recognized object and its dwell time exceeds a set threshold, displaying the associated information of that object through the imaging device; otherwise, not displaying the associated information of that object.
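The screening rule reduces to a dwell-time check on the projected gaze point. In the sketch below the bounding-box format, the dwell bookkeeping, and the 0.8 s threshold are assumptions; the patent only speaks of a "set threshold".

```python
# Sketch of the S8-S9 gaze screening rule: show an object's associated
# information only when the projected eye focus dwells inside its
# recognized bounding box longer than a threshold.
import time

DWELL_THRESHOLD_S = 0.8  # assumed value

def point_in_box(pt, box):
    (x, y), (x0, y0, x1, y1) = pt, box
    return x0 <= x <= x1 and y0 <= y <= y1

class GazeScreener:
    def __init__(self):
        self.current_id, self.enter_time = None, None

    def update(self, focus_xy, detected_objects):
        """detected_objects: {object_id: (x0, y0, x1, y1)} in image coords."""
        hit = next((oid for oid, box in detected_objects.items()
                    if point_in_box(focus_xy, box)), None)
        if hit != self.current_id:
            self.current_id, self.enter_time = hit, time.monotonic()
            return None                 # focus just moved; display nothing yet
        if hit is not None and time.monotonic() - self.enter_time > DWELL_THRESHOLD_S:
            return hit                  # dwell long enough: display its info
        return None
```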
The imaging selection of the associated information comprises the following step: the image information acquired by the sensing module is used to determine whether the article is a three-dimensional or a two-dimensional object in the real world; if the article is recognized as a three-dimensional object, the article information is imaged, and if it is recognized as a two-dimensional object, the three-dimensional model is imaged.
The imaging selection of the associated information may also comprise the following step: article recognition is completed from the image information acquired by the sensing module, and if the recognized article is marked by the system as one that cannot be used by touch, or as one whose internal structure needs to be viewed, the three-dimensional model is imaged preferentially.
An MR/AR/VR retrieval scene control method comprises the following steps:
S10, switching to or selecting the augmented reality interface of the retrieval scene through an interface on the intelligent glasses;
S11, acquiring an image within the field of view through the sensing module of the intelligent glasses, recognizing the image with an algorithm, identifying the feature points in the image, completing the locking of a feature object, and uploading the feature object data to a server through a communication module;
S12, returning the information associated with the feature object that is stored in the network database and the built-in database of the server to the intelligent glasses through the communication module, and imaging the associated information through the imaging device of the intelligent glasses.
The associated information comprises text data, voice data, and image data uploaded for the feature object by other users, and text data, voice data, and image data uploaded for the feature object by other ports.
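The S10 to S12 flow can be sketched as follows. OpenCV's ORB is used here only as one possible feature-point detector; the patent does not mandate any particular algorithm, and the upload and render callbacks are stubs standing in for the communication module and imaging device.

```python
# Sketch of the S10-S12 retrieval flow: extract feature points, "lock"
# the feature object, upload its data, and image the returned associations.
import cv2          # pip install opencv-python
import numpy as np

orb = cv2.ORB_create(nfeatures=500)

def lock_feature_object(gray_frame):
    """S11: feature points and descriptors describing the locked object."""
    keypoints, descriptors = orb.detectAndCompute(gray_frame, None)
    return {"n_points": len(keypoints), "descriptors": descriptors}

def retrieval_scene_step(gray_frame, upload, render):
    feature_data = lock_feature_object(gray_frame)  # S11: lock and package
    associated = upload(feature_data)               # communication module
    render(associated)                              # S12: imaging device

# Usage with a synthetic frame and a stub server round-trip:
frame = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
retrieval_scene_step(frame,
                     upload=lambda d: [f"{d['n_points']} feature points matched"],
                     render=print)
```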
A mobile terminal comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the above control method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the control method.
Drawings
FIG. 1 is a hardware logic block diagram of the augmented reality-based intelligent glasses system of the present application;
FIG. 2 is a hardware logic block diagram of the intelligent glasses system interconnected with a mobile phone according to the present application;
FIG. 3 is an internal logic diagram of embodiment one of the present application;
FIG. 4 is an external presentation diagram of embodiment one of the present application;
FIG. 5 is an interface presentation diagram of embodiment two of the present application;
FIG. 6 is an external frame diagram of embodiment two of the present application;
FIG. 7 is an internal logic diagram of embodiment two of the present application;
FIG. 8 is an interface presentation diagram of embodiment three of the present application;
FIG. 9 is an external frame diagram of embodiment three of the present application;
FIG. 10 is an internal logic diagram of embodiment four of the present application;
fig. 11 is an internal logic diagram of embodiment five of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
As shown in fig. 1, the hardware of an augmented reality-based intelligent glasses system includes a number of VR/AR/MR intelligent glasses access devices, a server, and a number of multilayer internet areas running on the server. The VR/AR/MR intelligent glasses access devices are connected to the server through wireless communication, and the server runs the multilayer internet areas. An internet area can be regarded as a slice of a virtual world; a specific number of slices can be superimposed and combined into a new slice, and a user can select a slice to project through a VR/AR/MR intelligent glasses access device. When a VR/AR/MR intelligent glasses access device acquires data from an internet area, that is, when the information retrieval and information interaction functions are performed: the server screens and classifies the real-time data uploaded by the access device according to the target information data corresponding to the selected slice and retains the specific data; the access device and the slice on the server complete the association and interaction of the specific information data; and, upon confirmation of the specific information data, the slice on the server transmits the multidimensional data corresponding to that information data to the access device for projection. A VR/AR/MR intelligent glasses access device contains several different types of sensors and several data input devices. When the access device uploads data to an internet area, that is, when the information publishing and information marking functions are performed: the current environment information and the person's action information are captured by the sensors and entered through the data input devices; the obtained data are uploaded to the server, where the slice classifies and screens them; the screened data are stored and can be retrieved and interacted with under that slice of the internet area, and can be projected as specific information data to any VR/AR/MR intelligent glasses access device that has established the corresponding association and interaction with the server.
As shown in fig. 2, the hardware of an augmented reality-based intelligent glasses system includes a number of intelligent glasses access devices, a smartphone, a server, and a number of multilayer internet areas running on the server. The intelligent glasses access devices establish a data channel with the smartphone through Bluetooth, and the smartphone is connected to the server through wireless communication. The server runs the multilayer internet areas; an internet area can be regarded as a slice of a virtual world, a specific number of slices can be superimposed and combined into a new slice, and the user can select a slice through the smartphone to display that slice's information data on the APP interface. The intelligent glasses contain a front-facing sensing module. When the smartphone acquires data from an internet area, that is, when the information retrieval and information interaction functions are performed: video information data are obtained in real time through the sensing module and transmitted to the smartphone APP over the Bluetooth connection; the APP client establishes a data connection with the internet area through wireless communication; the server screens and classifies the real-time video information data uploaded by the APP client according to the target information data corresponding to the selected slice and retains the specific data; the smartphone and the slice on the server complete the association and interaction of the specific video information data; and, upon confirmation of the specific information data, the slice on the server transmits the corresponding multidimensional data to the smartphone for display in the APP client. When the smartphone uploads data to the internet area, that is, when the information publishing and information marking functions are performed: data are entered through the smartphone and uploaded to the server, the slice on the server classifies and screens the data, and the screened data are stored, can be retrieved and interacted with under that slice of the internet area, and can be presented as specific information data on the smartphone's APP interface.
In a first embodiment, as shown in fig. 3, the software of an augmented reality-based intelligent glasses system includes multiple scenes divided into different functional scenes, including a message-leaving scene, an authoring scene, an interaction scene, a biological scene, a shopping scene, a retrieval scene, a push scene, a joint scene, a design scene, a tagging scene, a friend-making scene, a navigation scene, and a live-broadcast scene, but not limited to these; each functional scene is visually presented on the intelligent glasses by switching or superposition. Each functional scene may in turn be divided into different theme scenes, including, but not limited to, scenes corresponding to different characters, games, movies, and animations. The theme scenes are the slices described above; the slices can be superimposed, and a specific number of slices can be combined into a new slice, so a theme scene can be visually presented on the intelligent glasses individually or in combination. If the number of theme scenes is N, the upper limit of the number of theme scenes the user can switch among is 2^N − 1, i.e., every non-empty combination of the N slices.
As shown in fig. 4, in one embodiment, the hardware of an augmented reality-based intelligent glasses system includes an intelligent glasses body 101 and an interactive control device 102. The intelligent glasses body 101 establishes a data connection with the interactive control device 102, and the user can switch the virtual scenes on the intelligent glasses body 101 through the interactive control device 102. The virtual image on the intelligent glasses body 101 includes scene tags: scene one 201, scene two 202, scene three 203, scene four 204, and so on, which realize visual switching between scenes. Further, a scene tag may have lower-level tags for expanding the sub-scenes of a scene, and the interactive control device 102 realizes the switching and drilling of scenes.
Further, each scene in the data background corresponds to a usage rate, at least one feature standard, and a set of AR parameters. Scene one, scene two, scene three, scene four, and so on are ordered from high to low according to each scene's usage rate. The usage rate may be calculated as

usage(i) = t(i) / Σ_j t(j)

where t(i) is the usage duration of the i-th scene and Σ_j t(j) is the total length of time that the user has used the intelligent glasses.
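Concretely, the formula just normalizes per-scene usage durations. A small worked sketch follows; the durations are made-up numbers for illustration.

```python
# Usage rates and the resulting scene ordering (toy durations in hours).
durations = {"scene one": 12.0, "scene two": 7.5, "scene three": 3.0, "scene four": 1.5}

total = sum(durations.values())                      # total glasses usage time
usage = {name: t / total for name, t in durations.items()}

# Scenes are presented from highest usage rate to lowest.
ordering = sorted(usage, key=usage.get, reverse=True)
print(ordering)                       # ['scene one', 'scene two', 'scene three', 'scene four']
print(round(usage["scene one"], 2))   # 0.5  (12 of 24 hours)
```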
Furthermore, in order to gradually adapt to the user's habits and update the data service, the specific method is as follows:
S52, establishing a user database to store the user's feature data, such as the usage duration, usage rate, or usage frequency of each scene;
S53, updating the user's feature data after each use, and ordering the scenes from high to low according to the feature data.
Furthermore, in order to better match user needs with personalized, customized services, the specific method is as follows:
S54, classifying the feature data of all users in the background;
S55, establishing user profiles from the classified user feature data, where each type of user profile corresponds to a certain interval of user feature data and to a certain scene ordering;
S56, classifying a new user into a user profile according to the user's operations, and applying the scene ordering corresponding to that profile.
Furthermore, autonomous switching and selection of scenes can be realized, and the specific method is as follows:
S57, analyzing the detected ambient environment data to obtain at least one main feature of the surrounding environment;
S58, comparing the feature standard of each scene with the main feature, and if a match is found, automatically switching to the scene corresponding to the matched feature standard.
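The sketch below expresses S57 and S58 in Python, trying scenes in descending usage-rate order so that frequent scenes are checked first; `match_standard` is an assumed ratio-type predicate (e.g., "the portrait occupies at least one third of the image"), not a rule fixed by the patent.

```python
# Sketch of S57-S58: compare the environment's main features against each
# scene's feature standard, checking scenes in usage-rate order.

def match_standard(features, standard):
    # Assumed example rule: a ratio-type standard such as the portrait
    # occupying at least the given fraction of the full image.
    return features.get(standard["feature"], 0.0) >= standard["min_ratio"]

def auto_select_scene(main_features, scenes):
    """scenes: list sorted by usage rate, each with 'name', 'standard', 'params'."""
    for scene in scenes:                          # highest usage rate first
        if match_standard(main_features, scene["standard"]):
            return scene["name"], scene["params"] # switch to these AR parameters
    return None, None                             # no match: keep current scene

# Usage (illustrative): the portrait fills 40% of the frame.
scenes = [
    {"name": "audio-visual", "standard": {"feature": "screen_ratio",   "min_ratio": 1/3}, "params": {}},
    {"name": "chat social",  "standard": {"feature": "portrait_ratio", "min_ratio": 1/3}, "params": {}},
]
print(auto_select_scene({"portrait_ratio": 0.4}, scenes))  # ('chat social', {})
```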
In the autonomous switching and selection of scenes, the feature standard may be set as the portrait occupying one third or more of the full image, or as another ratio set by the manufacturer, such as one fourth or one fifth.
For example, the scene mode most often used by college-student users is the audio-visual mode. Among all scene-mode usage rates, the usage rate of the audio-visual mode may be 45%, that of the chat social mode 35%, and that of the other modes 20%. In addition, for a device that applies the scene panel of this embodiment as an automatic selection method, the usage rate, feature standard, and model parameters of each scene mode can be stored in advance at the factory, set according to the frequency and probability with which ordinary people (or the main consumer group of AR/VR/MR devices) use each scene mode. When the AR/VR/MR device is used, the camera captures and analyzes the main features of the surrounding environment, and the main features of the 3D image data are compared with the stored feature standards of the various scenes in order of usage rate, to obtain the scene mode corresponding to the main features. In other words, because a user of an AR/VR/MR device has habitual scenes (for example, a user who mainly live-broadcasts indoors stays in the live-broadcast scene), the comparison order is determined by the usage rate of the scene modes, so that the number of comparisons can be reduced and the automatic selection of a scene mode sped up. For example: the scene modes most used by ordinary people are the audio-visual mode and the chat social mode, followed by the office mode and other modes. Therefore, when the main features of the environment are obtained, the feature standard of the audio-visual mode is compared first; if the main features of the environment differ from the feature standard of the audio-visual mode, the feature standard of the chat social mode is compared; if they also differ from the feature standard of the chat social mode, the office mode and the other modes are compared in turn. When a scene mode matching the main features of the environment is found, the current VR parameters are replaced by the VR parameters of that scene mode. Comparing in order of the usage rates of ordinary people thus reduces search time, and the user immediately gets the VR parameters that best fit the scene, improving the experience.
In a second embodiment, as shown in fig. 5, in the visual scene superposition state of the augmented reality-based intelligent glasses system under the message-leaving function, visual imaging of voice data and text data can be superimposed on the real scene as it passes through the lenses of the intelligent glasses. The intelligent glasses system hardware realizing the message-leaving function is shown in fig. 6 and includes an intelligent glasses body one 301, an imaging device one 401, a sensing module one 402, a voice input device one 403, a text input device one 404, a positioning module one 405, a communication module one 406, and a server one 407. The imaging device one 401, sensing module one 402, voice input device one 403, text input device one 404, positioning module one 405, and communication module one 406 each establish a data connection with the intelligent glasses body one 301, and the communication module one 406 and the server one 407 are connected by remote communication. The sensing module one 402 may be a camera or a laser radar.
As shown in fig. 7, the control method of the augmented reality-based intelligent glasses system with the message-leaving function in the second embodiment is as follows:
S1, switching to or selecting the augmented reality interface of the message-leaving scene through an interface on the intelligent glasses;
S2, acquiring real-time GPS information and image information through the positioning module and the sensing module on the intelligent glasses;
S3, uploading the GPS information and the image information to the server through the communication module, and matching them against the GPS information and image information attached to the voice data and text data in the historical data stored on the server;
S4, after a successful match, the server returns the matching voice data and text data from the historical data, which are received through the communication module of the intelligent glasses and displayed through the imaging device of the intelligent glasses.
Furthermore, in order to reduce the number of matching operations and increase the matching speed when matching historical data against real-time data, the historical data are preprocessed as follows:
S5, dividing space into campuses according to one or more of the GPS information and image information attached to the voice data and text data in the historical data, and determining the range of GPS information and/or image information for each campus;
S6, classifying the voice data and text data in the historical data according to the defined ranges, completing a division of the data with the campus as the basic unit.
The division into campuses in S5 may specifically be:
S51, primarily dividing the campuses according to GPS information, with campus 1, campus 2, campus 3, and campus 4 each corresponding to a GPS range;
S52, secondarily dividing the campuses according to the image information acquired by the sensing module, identifying and extracting image feature quantities or markers, and labeling each campus feature quantity or marker, e.g., campus 1 marker 1, campus 1 marker 2, campus 1 marker 3, campus 2 marker 1, campus 2 marker 2, campus 2 marker 3, campus 3 marker 1, campus 3 marker 2, and campus 3 marker 3.
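The two-level division can be pictured as a coarse GPS range per campus followed by labeled image markers within it. In the sketch below, the ranges, marker names, and the campus table format are made-up illustrative values.

```python
# Sketch of the S51-S52 two-level campus division.
CAMPUSES = {
    "campus 1": {"gps_range": ((30.00, 120.00), (30.01, 120.01)),
                 "markers": {"marker 1", "marker 2", "marker 3"}},
    "campus 2": {"gps_range": ((30.01, 120.01), (30.02, 120.02)),
                 "markers": {"marker 1", "marker 2", "marker 3"}},
}

def in_range(gps, rng):
    (lat, lon), ((lat0, lon0), (lat1, lon1)) = gps, rng
    return lat0 <= lat <= lat1 and lon0 <= lon <= lon1

def locate(gps, detected_markers):
    """Primary division by GPS, secondary confirmation by image markers."""
    for name, campus in CAMPUSES.items():
        if in_range(gps, campus["gps_range"]):
            hits = campus["markers"] & detected_markers
            return name, hits          # campus plus the matched marker labels
    return None, set()

print(locate((30.005, 120.005), {"marker 2"}))  # ('campus 1', {'marker 2'})
```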
The method for matching the preprocessed data is as follows:
S7, matching the voice data and text data in the server's historical data against the GPS information and image information of each campus block according to their attached GPS information and image information;
S8, if a match is found, moving the voice data or text data into the matching campus block;
S9, matching the real-time GPS information and real-time image information on the intelligent glasses against the GPS information and image information of each campus block;
S10, if a match is found, virtually imaging the voice data and text data of that campus block on the intelligent glasses.
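Reusing the `locate()` helper from the previous sketch, S7 to S10 become a filing step for historical records and a lookup step for the glasses' real-time data; the record format is an assumption.

```python
# Sketch of S7-S10: historical voice/text records are filed into campus
# blocks by their attached GPS/image metadata, and real-time glasses data
# is matched against the same blocks to decide what to image.

history_blocks = {}          # campus name -> list of records

def file_record(record):
    """S7-S8: move a record into the campus block its metadata matches."""
    campus, _ = locate(record["gps"], record["markers"])
    if campus is not None:
        history_blocks.setdefault(campus, []).append(record)

def records_for_glasses(gps, detected_markers):
    """S9-S10: records of the matched block are virtually imaged."""
    campus, _ = locate(gps, detected_markers)
    return history_blocks.get(campus, [])

file_record({"gps": (30.005, 120.005), "markers": {"marker 1"},
             "text": "nice spot for lunch"})
print(records_for_glasses((30.006, 120.004), {"marker 1"}))
```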
Further, in order to update the historical data, real-time data are captured as follows:
S11, acquiring voice data and text data through the voice input device and text input device of the intelligent glasses, together with the attached GPS information and image information acquired in real time through the positioning module and the sensing module;
S12, uploading the voice data, text data, and attached GPS information and image information to the historical database on the server.
In a third embodiment, as shown in fig. 8, in the visual scene superposition state of the augmented reality-based intelligent glasses system under the shopping function, visual imaging of related information data can be superimposed on the real scene as it passes through the lenses of the intelligent glasses. As shown in fig. 9, the intelligent glasses system hardware realizing the shopping function includes an intelligent glasses body four 801, an imaging device four 901, a sensing module four 902, a communication module four 906, and a server four 907. The imaging device four 901, the sensing module four 902, and the communication module four 906 each establish a data connection with the intelligent glasses body four 801, and the communication module four 906 and the server four 907 are connected by remote communication. The sensing module four 902 may be a camera or a laser radar. Furthermore, in order to capture the position where the human eye dwells and show details there while shielding irrelevant information data, a rear camera four 905 is provided, which establishes a data connection with the intelligent glasses body four 801; furthermore, in order to complete the operation experience of a product, a motion capture device four 903 is provided, which establishes a data connection with the intelligent glasses body four 801; furthermore, in order to complete the virtual modeling of a product, an image modeling and trigger setting port four 802 is provided, which establishes a data connection with the server four 907.
In the third embodiment, the control method of the augmented reality-based intelligent glasses system with the shopping function is as follows:
S39, switching to or selecting the augmented reality interface of the shopping scene through an interface on the intelligent glasses;
S40, acquiring an image within the field of view through the sensing module of the intelligent glasses, recognizing the image with an algorithm, and identifying the corresponding article type;
S41, retrieving information associated with the article type from the network database and the built-in database, and imaging the associated information through the imaging device of the intelligent glasses.
The associated information in S41 may be an encyclopedia introduction of the article and shopping links from the various platforms. Further, the shopping links are sorted by an indicator value, which may be the price of the article.
The associated information in S41 may also be obtained by other PC ports, mobile ports, or intelligent glasses ports annotating the article, attaching text annotations or link annotations to the image data.
Further, to make the presentation more accurate, when the corresponding article type is identified in S40, the logo information on the article is extracted to identify the article's brand and model; the characteristic information of the article is described in the introduction of S41, and links of the same brand, or of the same brand and model, are used as the primary index for ranking.
Further, in order to experience the use of equipment that is difficult to operate directly, and to understand the internal structure of equipment that is difficult to disassemble, the associated information may be a three-dimensional model. The method for obtaining the three-dimensional model is as follows:
S411, establishing a three-dimensional model of the article through a PC port, a mobile port, or an intelligent glasses port, wherein the three-dimensional model includes, but is not limited to, the three-dimensional structure and physical characteristics of the article;
S412, setting and enabling characteristic image information for the three-dimensional model through a port, and at the same time setting specific gesture information for operating the three-dimensional model.
The control method for operating the three-dimensional model is as follows:
S413, acquiring an image within the field of view through the sensing module of the intelligent glasses, and recognizing the image with an algorithm to identify the corresponding article;
S414, presenting the three-dimensional model of the corresponding article through the imaging device of the intelligent glasses, acquiring an image within the field of view through the sensing module, and checking the image for characteristic image information or specific gesture information. If characteristic image information is present, the size and state of the three-dimensional model are adjusted according to the set physical characteristics and the characteristic image information, and the adjusted model is superimposed on the characteristic image through the imaging device, matching and fitting the key adaptation points so that the three-dimensional model can be used; if specific gesture information is present, the size, shape, color, and display position of the three-dimensional model are adjusted according to the preset specific gesture. Further, to prevent adjustment by false touches, locking of the three-dimensional model may be set.
Furthermore, in order to prevent an explosion of visually displayed information, the associated information must be screened before being imaged by the imaging device of the intelligent glasses in S41. The screening method is:
S415, capturing eyeball information through the rear camera, acquiring the optical focus of the eyeball at the current moment, projecting the focus onto the image acquired by the sensing module, and determining whether the focus lies within the range of a recognized object;
S416, if the focus lies within the range of a recognized object and its dwell time exceeds a set threshold, displaying the associated information of that object through the imaging device; otherwise, not displaying the associated information of that object.
Further, to prevent the system from not knowing which of the three-dimensional model and the article information described above should be imaged, preconditions are set to determine what is imaged in the shopping scene. The precondition can be set according to one of the following three schemes:
Precondition scheme one: user operation information is acquired through the port to complete the selection of the imaging content;
Precondition scheme two: the image information acquired by the sensing module is used to determine whether the article is a three-dimensional or a two-dimensional object in the real world; if the article is recognized as a three-dimensional object, the article information is imaged, and if it is recognized as a two-dimensional object, the three-dimensional model is imaged;
Precondition scheme three: article recognition is completed from the image information acquired by the sensing module, and if the recognized article is marked by the system as one that cannot be used by touch, or as one whose internal structure needs to be viewed, the three-dimensional model is imaged preferentially.
In a fourth embodiment, as shown in fig. 10, in the visual scene superposition state of the augmented reality-based intelligent glasses system under the retrieval function, visual imaging of related retrieval data can be superimposed on the real scene as it passes through the lenses of the intelligent glasses. The intelligent glasses system hardware realizing the retrieval function is the same as in the second embodiment. Furthermore, a rear camera can be provided on the intelligent glasses so that the fourth embodiment has the same information screening function as the third embodiment.
In the fourth embodiment, the control method of the augmented reality-based intelligent glasses system with the retrieval function is as follows:
S42, switching to or selecting the augmented reality interface of the retrieval scene through an interface on the intelligent glasses;
S43, acquiring an image within the field of view through the sensing module of the intelligent glasses, recognizing the image with an algorithm, identifying the feature points in the image, completing the locking of feature objects, and uploading the feature object data to the server through the communication module;
S44, returning the information associated with the feature objects that is stored in the network database and the built-in database of the server to the intelligent glasses through the communication module, and imaging the associated information through the imaging device of the intelligent glasses.
The associated information may include multidimensional data such as text data, voice data, and image data uploaded for the feature object by other users, and multidimensional data such as text data, voice data, and image data uploaded for the feature object by other ports.
Taking the real-scene imaging of a shop's street front as an example, the multidimensional data uploaded by other users for the feature object include historical review information published by users after consuming at the shop, or real-time friend-making information published by users while staying at the shop; the multidimensional data uploaded by other ports for the feature object include information uploaded by the merchant about the services and products offered and sold in the store, information uploaded by the merchant about activities and promotions held in the store, or three-dimensional image information uploaded by the merchant of the brand mascot and brand spokespersons.
Taking the real-scene imaging of a book or movie poster as an example, the multidimensional data uploaded by other users for the feature object include book reviews or film reviews published by users after reading or viewing, or real-time friend-making information published by users after reading or viewing; the multidimensional data uploaded by other ports for the feature object include reading or viewing activities uploaded by an organizer, peripheral merchandise of the book or movie uploaded by a merchant, or creative thoughts uploaded by the author.
Further, in S44, the imaging position of the associated information is either the same as the position where the real image of the feature object appears through the lens, or offset from it in a specific direction.
Further, in order to achieve accurate positioning of the feature object, in S43 the user's position may be obtained through the positioning module and combined with the image information to complete the locking of the feature object.
Furthermore, in order to prevent an explosion of visually displayed information, information is shielded by folding or screening.
The folding method is specifically: the associated information is collapsed and expanded by user operation; when collapsed, a prompt mark for the associated information can be shown on the lens, and when expanded, the imaging of the associated information is completed.
The screening method specifically screens the associated information at imaging time:
S441, capturing eyeball information through the rear camera, acquiring the optical focus of the eyeball at the current moment, projecting the focus onto the image acquired by the sensing module, and determining whether the focus lies within the range of a recognized feature object;
S442, if the focus lies within the range of a recognized feature object and its dwell time exceeds a set threshold, displaying the associated information of that feature object through the imaging device; otherwise, not displaying it.
Furthermore, in order to create a database for information retrieval, a retrieval model library is created from data actively acquired by the system and data uploaded by users. The retrieval model library comprises a one-dimensional database, a two-dimensional database, and a three-dimensional database.
The data in the three-dimensional database can be built from all real, historical, virtual, and exclusive objects in human society and nature, or can be concepts or models created and imagined by users on that basis. The two-dimensional database can be obtained by mapping or slicing the data of the three-dimensional database, or can be actively created and uploaded by users; the one-dimensional database can be obtained by mapping or slicing the data of the two-dimensional database, or can be actively created and uploaded by users.
Taking three-dimensional data as an example, the data mapping or slicing may specifically be:
Scheme one: extracting the data information of an arbitrary cross-section of the three-dimensional data;
Scheme two: converting the three-dimensional data into two-dimensional data information through a certain function transform.
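The two reduction schemes can be illustrated on a point-cloud representation, which is itself an assumption; scheme one takes a cross-section, and scheme two uses an orthographic projection as one example of a function transform.

```python
# Sketch of the two 3D-to-2D reduction schemes of the retrieval model library.

def cross_section(points, z_plane, tol=0.05):
    """Scheme one: keep points lying on an arbitrary section z = z_plane."""
    return [(x, y) for (x, y, z) in points if abs(z - z_plane) <= tol]

def project(points):
    """Scheme two: function transform to 2D (drop the z coordinate)."""
    return [(x, y) for (x, y, _z) in points]

# A unit cube sampled on an 11 x 11 x 11 grid:
cube = [(x / 10, y / 10, z / 10)
        for x in range(11) for y in range(11) for z in range(11)]
print(len(cross_section(cube, z_plane=0.5)))  # 121 points on the mid-section
print(len(project(cube)))                     # 1331 projected points
```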
In the retrieval model library, the uploaded feature object data are retrieved, and the retrieval results are arranged and displayed in space according to matching degree, such as structure matching degree, appearance matching degree, principle matching degree, and information matching degree.
The display of retrieval results can combine results from the one-dimensional, two-dimensional, and three-dimensional databases, or the user can freely select one or more of the three databases to display.
Furthermore, the retrieval function can be combined not only with a network database but also with a local database, where the local database can be a digital twin that maps the local real world. Taking the digital local database of a library as an example: after entering the library, the user can search the library's local database; once a search target is confirmed, the system generates a three-dimensional indicator to guide the user to the target's location, and after the target is identified, the user can obtain the corresponding information through other users' annotations on the target.
In a fifth embodiment, as shown in fig. 11, in the visual scene superposition state of the augmented reality-based intelligent glasses system under the push function, visual imaging of related push data can be superimposed on the real scene as it passes through the lenses of the intelligent glasses. The hardware of the intelligent glasses system realizing the push function is the same as in the second embodiment, except that no positioning module is required.
In the fifth embodiment, the control method of the augmented reality-based intelligent glasses system with the push function is as follows:
S52, switching to or selecting the augmented reality interface of the push scene through an interface on the intelligent glasses;
S53, acquiring images within the field of view through the sensing module of the intelligent glasses, completing recognition and positioning of the video content, uploading the information data of the specifically positioned content to the server through the communication module according to the system settings, and having the server return the built-in advertisements, information, and content peripherals to the intelligent glasses;
S54, visually presenting the pushed advertisements, information, and content peripherals on the intelligent glasses through the imaging module.
Furthermore, in order to increase the interest and interactivity of the video content, a comment section can be added so that intelligent glasses users can discuss the video content with each other.
In a visual scene superposition state of an augmented reality-based intelligent glasses system, visual imaging of related comprehensive data can be superimposed on the real scene as it passes through the lenses of the intelligent glasses. Intelligent glasses with the friend-making function combine one or more of the above embodiments to complete data interaction between pairs of intelligent glasses, including image data, text data, and voice data.
The server accessed by the intelligent glasses system can be a centrally deployed server or distributed servers deployed at the edge, and the number of servers is not limited. If distributed servers are used, they can be placed at various locations; the intelligent glasses can access a distributed server through several spatial sensing methods such as GPS sensing, network sensing, and radar sensing, and the distributed servers can be placed in public spaces such as buses, shops, schools, hospitals, public institutions, and enterprises.
In another embodiment, an augmented reality-based distributed-server intelligent glasses system includes a number of distributed servers placed at different locations and a number of network-accessible AR/MR/VR intelligent glasses. A control method for the distributed-server intelligent glasses system comprises the following steps:
S59, the AR/MR/VR intelligent glasses access the distributed server placed in the spatial area through the network, GPS, radar, or images;
S60, the accessed distributed server transmits its stored two/three-dimensional image and video data, audio data, and text data to the accessing AR/MR/VR intelligent glasses through data communication;
S61, the AR/MR/VR intelligent glasses visually present the received two/three-dimensional image and video data, audio data, and text data through the imaging device.
The AR/MR/VR intelligent glasses accessing the distributed server placed in the spatial area through the network may specifically be: the AR/MR/VR intelligent glasses complete access through the wireless local area network of the distributed server.
The AR/MR/VR intelligent glasses accessing the distributed server placed in the spatial area through GPS may specifically be:
S591, the distributed server uploads the GPS information of its block to the cloud;
S592, the cloud compares the GPS information uploaded in real time by the AR/MR/VR intelligent glasses with the GPS information of the block uploaded by the distributed server;
S593, if the comparison produces a match, the cloud connects the AR/MR/VR intelligent glasses to the matched distributed server.
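The GPS-based access of S591 to S593 reduces to a registry lookup in the cloud. In the sketch below, modeling a block as a centre plus a radius in degrees is an illustrative simplification, not the patent's block definition.

```python
# Sketch of S591-S593: the cloud matches the glasses' real-time GPS
# against the block each distributed server registered, then connects
# the glasses to the matched server.
import math

server_blocks = {}    # server id -> (lat, lon, radius_deg)

def register_server(server_id, lat, lon, radius_deg):
    """S591: a distributed server uploads the GPS info of its block."""
    server_blocks[server_id] = (lat, lon, radius_deg)

def match_server(glasses_lat, glasses_lon):
    """S592-S593: compare and return the server the glasses should access."""
    for server_id, (lat, lon, r) in server_blocks.items():
        if math.hypot(glasses_lat - lat, glasses_lon - lon) <= r:
            return server_id
    return None       # no block matched: stay on the central server

register_server("bus-17", 30.274, 120.155, 0.001)
print(match_server(30.2741, 120.1552))   # 'bus-17'
```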
In the above embodiments, the device that acquires human operations may be an image sensor, a radar sensor, a touch sensor, a key sensor, a voice sensor, or any other sensor that can capture human behavior.
In the above embodiments, a scene is entered by manual selection; a scene can also be entered automatically by recognizing whether a scene is set for the area, and further, if multiple scenes are recognized, the user's habits are obtained through an algorithm to complete the scene selection and enter the scene automatically.
The intelligent glasses protected by the invention can be single-function intelligent glasses with a single function/single scene, or multi-function intelligent glasses with multiple functions/multiple scenes; the multi-function intelligent glasses can be a combination of two or more single functions/single scenes, including hardware combinations and function combinations.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. An MR/AR/VR shopping scene control method is characterized by comprising the following steps:
S1, switching to or selecting the augmented reality interface of the shopping scene through an interface on the intelligent glasses;
S2, acquiring an image within the field of view through a sensing module of the intelligent glasses, recognizing the image with an algorithm, and identifying the corresponding article type;
S3, retrieving information associated with the article type from a network database and a built-in database, and imaging the associated information through an imaging device of the intelligent glasses.
2. The control method according to claim 1, wherein the correlation information modeling step is as follows:
S4, establishing a three-dimensional model of the article through a PC port, a mobile port, or an intelligent glasses port, wherein the three-dimensional model includes, but is not limited to, the three-dimensional structure and physical characteristics of the article;
S5, setting and enabling characteristic image information for the three-dimensional model through a port, and at the same time setting specific gesture information for operating the three-dimensional model.
3. The control method according to claim 2, characterized in that the three-dimensional model control step is as follows:
S6, acquiring an image within the field of view through the sensing module of the intelligent glasses, and recognizing the image with an algorithm to identify the corresponding article;
S7, presenting the three-dimensional model of the corresponding article through the imaging device of the intelligent glasses, acquiring an image within the field of view through the sensing module, and checking the image for characteristic image information or specific gesture information; if characteristic image information is present, the size and state of the three-dimensional model are adjusted according to the set physical characteristics and the characteristic image information, and the adjusted model is superimposed on the characteristic image through the imaging device, matching and fitting the key adaptation points so that the three-dimensional model can be used; if specific gesture information is present, the size, shape, color, and display position of the three-dimensional model are adjusted according to the preset specific gesture.
4. The control method according to claim 1, wherein the associated information is screened before being imaged by the imaging device of the intelligent glasses in S3, the screening comprising:
S8, capturing eyeball information through a rear camera, acquiring the optical focus of the eyeball at the current moment, projecting the focus onto the image acquired by the sensing module, and determining whether the focus lies within the range of a recognized object;
S9, if the focus lies within the range of a recognized object and its dwell time exceeds a set threshold, displaying the associated information of that object through the imaging device; otherwise, not displaying the associated information of that object.
5. The control method according to claim 1, wherein the imaging selection of the associated information comprises the following step: the image information acquired by the sensing module is used to determine whether the article is a three-dimensional or a two-dimensional object in the real world; if the article is recognized as a three-dimensional object, the article information is imaged, and if it is recognized as a two-dimensional object, the three-dimensional model is imaged.
6. The control method according to claim 1, wherein the imaging selection of the associated information comprises the following step: article recognition is completed from the image information acquired by the sensing module, and if the recognized article is marked by the system as one that cannot be used by touch, or as one whose internal structure needs to be viewed, the three-dimensional model is imaged preferentially.
7. An MR/AR/VR retrieval scene control method is characterized by comprising the following steps:
S10, switching to or selecting the augmented reality interface of the retrieval scene through an interface on the intelligent glasses;
S11, acquiring an image within the field of view through the sensing module of the intelligent glasses, recognizing the image with an algorithm, identifying the feature points in the image, completing the locking of a feature object, and uploading the feature object data to a server through a communication module;
S12, returning the information associated with the feature object that is stored in the network database and the built-in database of the server to the intelligent glasses through the communication module, and imaging the associated information through the imaging device of the intelligent glasses.
8. The control method according to claim 7, wherein the associated information comprises text data, voice data, and image data uploaded for the feature object by other users, and text data, voice data, and image data uploaded for the feature object by other ports.
9. A mobile terminal, characterized in that it comprises a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the control method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the control method according to any one of claims 1 to 8.
CN202111460735.9A 2021-12-02 2021-12-02 MR/AR/VR shopping and retrieval scene control method, mobile terminal and readable storage medium Pending CN114119171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111460735.9A CN114119171A (en) 2021-12-02 2021-12-02 MR/AR/VR shopping and retrieval scene control method, mobile terminal and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111460735.9A CN114119171A (en) 2021-12-02 2021-12-02 MR/AR/VR shopping and retrieval scene control method, mobile terminal and readable storage medium

Publications (1)

Publication Number Publication Date
CN114119171A true CN114119171A (en) 2022-03-01

Family

ID=80366376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111460735.9A Pending CN114119171A (en) 2021-12-02 2021-12-02 MR/AR/VR shopping and retrieval scene control method, mobile terminal and readable storage medium

Country Status (1)

Country Link
CN (1) CN114119171A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115097903A (en) * 2022-05-19 2022-09-23 深圳智华科技发展有限公司 MR glasses control method and device, MR glasses and storage medium
CN115097903B (en) * 2022-05-19 2024-04-05 深圳智华科技发展有限公司 MR glasses control method and device, MR glasses and storage medium
CN115601672A (en) * 2022-12-14 2023-01-13 广州市玄武无线科技股份有限公司(Cn) VR intelligent shop patrol method and device based on deep learning

Similar Documents

Publication Publication Date Title
US11113524B2 (en) Schemes for retrieving and associating content items with real-world objects using augmented reality and object recognition
US9542778B1 (en) Systems and methods related to an interactive representative reality
TWI615776B (en) Method and system for creating virtual message onto a moving object and searching the same
CN107633441A (en) Commodity in track identification video image and the method and apparatus for showing merchandise news
TWI617930B (en) Method and system for sorting a search result with space objects, and a computer-readable storage device
US20200257121A1 (en) Information processing method, information processing terminal, and computer-readable non-transitory storage medium storing program
CN114119171A (en) MR/AR/VR shopping and retrieval scene control method, mobile terminal and readable storage medium
CN103686344A (en) Enhanced video system and method
TWI617931B (en) Method and system for remote management of location-based space object
CN110168615A (en) Information processing equipment, information processing method and program
CN112989214A (en) Tourism information display method and related equipment
CN113766296A (en) Live broadcast picture display method and device
TWI642002B (en) Method and system for managing viewability of location-based spatial object
WO2017104089A1 (en) Collaborative head-mounted display system, system including display device and head-mounted display, and display device
JP6318289B1 (en) Related information display system
KR20160012269A (en) Method and apparatus for providing ranking service of multimedia in a social network service system
TW201823929A (en) Method and system for remote management of virtual message for a moving object
KR101502984B1 (en) Method and apparatus for providing information of objects in contents and contents based on the object
CN114935972A (en) MR/AR/VR labeling and searching control method, mobile terminal and readable storage medium
KR20200060035A (en) Apparatus for providing vr video-based tour guide matching service and method thereof
KR20150093263A (en) Video producing service device based on private contents, video producing method based on private contents and computer readable medium having computer program recorded therefor
CN114153315A (en) Augmented reality distributed server intelligent glasses system and control method
CN114185433A (en) Intelligent glasses system based on augmented reality and control method
CN114153214B (en) MR/AR/VR message and creation scene control method, mobile terminal and readable storage medium
US20240095969A1 (en) Pose recommendation and real-time guidance for user-generated content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination