CN115981517A - VR multi-terminal collaborative interaction method and related equipment

Info

Publication number: CN115981517A
Application number: CN202310281149.0A
Authority: CN (China)
Prior art keywords: touch, real, information, panoramic, instruction content
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN115981517B (granted publication)
Inventor: 余大飞 (Yu Dafei)
Assignee (current and original): Beijing Tongchuang Blue Sky Cloud Technology Co., Ltd.

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of the present application provide a VR multi-terminal collaborative interaction method and related equipment, which can solve the problems of poor interactivity and high interaction difficulty in current VR panoramic touch interfaces. The method comprises the following steps: when a target VR panoramic touch interface receives a touch instruction from a first user, acquiring real interactive object information within a preset area corresponding to the touch instruction, where the VR panoramic touch interface is obtained by panoramic shooting of a real scene; acquiring instruction content information based on the touch track corresponding to the touch instruction; and, when there are at least two pieces of instruction content information, determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information, and executing the target instruction content in the VR panoramic touch interface.

Description

VR multi-terminal collaborative interaction method and related equipment
Technical Field
The present application relates to the field of computer technology, and in particular to a VR multi-terminal collaborative interaction method and related equipment.
Background
With the surge in popularity of VR (virtual reality) technology, VR panorama features such as virtual house viewing, car viewing, exhibition touring, and shopping have been put into use on all major platforms. They have quickly become a de facto standard of the industry and set a new trend across many marketing fields. This mode is convenient: it saves customers travel costs, improves operational efficiency for enterprises, enhances the customer experience, and reduces labor costs.
At present, in order to meet customer demands and enable genuine interaction between customers and VR scenes, more interactive objects are placed in the real scenes, but this increases the difficulty of operation for users.
Disclosure of Invention
The embodiments of the present application provide a VR multi-terminal collaborative interaction method and related equipment that can solve the problems of poor interactivity and high interaction difficulty in existing VR panoramic touch interfaces.
A first aspect of the embodiments of the present application provides a VR multi-terminal collaborative interaction method, including:
when a target VR panoramic touch interface receives a touch instruction from a first user, acquiring real interactive object information within a preset area corresponding to the touch instruction, where the VR panoramic touch interface is obtained by panoramic shooting of a real scene;
acquiring instruction content information based on the touch track corresponding to the touch instruction;
and, when there are at least two pieces of instruction content information, determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information, and executing the target instruction content in the VR panoramic touch interface.
Optionally, when there are at least two pieces of instruction content information, the determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information includes:
determining the track type of the touch track when there are at least two pieces of instruction content information;
when the touch track is a sliding track and a real interactive object exists within the preset area corresponding to the touch instruction, determining the interaction type of the real interactive object, where the sliding track is associated with view-angle-adjustment instruction content;
acquiring the identity information of the first user when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track;
and determining the target instruction content from the view-angle-adjustment instruction content and the real interactive object instruction content based on the identity information of the first user.
Optionally, when there are at least two pieces of instruction content information, the determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information includes:
determining the track type of the touch track when there are at least two pieces of instruction content information;
when the touch track is a click track and a real interactive object exists within the preset area corresponding to the touch instruction, determining the interaction type of the real interactive object, where a step control associated with step instruction content exists within the preset area corresponding to the click track;
acquiring the identity information of the first user when the interaction type of the real interactive object includes an opening interaction matching the click track;
and determining the target instruction content from the step instruction content and the real interactive object instruction content based on the identity information of the first user.
Optionally, the identity information includes an identity type, the real interactive object is associated with at least one identity type, and the acquiring the identity information of the first user when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track includes:
when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track, acquiring first image information based on the front-facing imaging device of the terminal to which the target VR panoramic touch interface belongs;
and determining the identity information of the first user based on the first image information.
Optionally, the method further comprises:
when the target VR panoramic touch interface receives at least two touch instructions within a preset time range, acquiring second image information based on the front-facing imaging device of the terminal to which the target VR panoramic touch interface belongs, where the at least two touch instructions are different;
and preferentially executing the real interactive object instruction content when the second image information indicates that the at least two touch instructions come from different users.
Optionally, the method further includes:
when the target VR panoramic touch interface is the panoramic touch interface of a real house, acquiring the position information of the real house corresponding to the target VR panoramic touch interface;
simulating the outdoor light brightness based on the position information of the real house and the current moment;
and generating the view foreground of the real lighting windows on the target VR panoramic touch interface according to the light brightness.
Optionally, the method further includes:
when the target VR panoramic touch interface is the panoramic touch interface of a real house, acquiring the position information of the real house corresponding to the target VR panoramic touch interface;
acquiring information about surrounding obstructions based on the position information of the real house;
simulating the incidence angle and intensity of outdoor light according to the position information and height information of the real house, the surrounding obstruction information, and the current moment;
and generating the view foreground of the real lighting windows and the indoor perspective foreground in the target VR panoramic touch interface according to the incidence angle and intensity.
A second aspect of the embodiments of the present application provides a VR multi-terminal collaborative interaction apparatus, including:
a receiving unit, configured to acquire, when a target VR panoramic touch interface receives a touch instruction from a first user, real interactive object information within a preset area corresponding to the touch instruction, where the VR panoramic touch interface is obtained by panoramic shooting of a real scene;
an acquisition unit, configured to acquire instruction content information based on the touch track corresponding to the touch instruction;
and a determining unit, configured to determine, when there are at least two pieces of instruction content information, the target instruction content of the first user according to the identity information of the first user and the real interactive object information, and to execute the target instruction content in the VR panoramic touch interface.
A third aspect of the embodiments of the present application provides an electronic device, including a memory and a processor, where the processor is configured to implement the steps of the above VR multi-terminal collaborative interaction method when executing a computer program stored in the memory.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the above VR multi-terminal collaborative interaction method.
In summary, in the VR multi-terminal collaborative interaction method provided by the embodiments of the present application, when a target VR panoramic touch interface receives a touch instruction from a first user, real interactive object information within a preset area corresponding to the touch instruction is acquired, where the VR panoramic touch interface is obtained by panoramic shooting of a real scene; instruction content information is acquired based on the touch track corresponding to the touch instruction; and, when there are at least two pieces of instruction content information, the target instruction content of the first user is determined according to the identity information of the first user and the real interactive object information, and the target instruction content is executed in the VR panoramic touch interface. A VR panoramic touch interface mostly displays a real scene; for example, in a home scene or a house renting and selling scene, a user can move, switch the viewing angle, and even interact with real objects or people in the real scene through the screen of a display terminal such as a touch phone. However, since these screen interactions generally have no dedicated operation button, they are performed as undifferentiated slides and clicks on the screen. When a touch operation such as switching the viewing angle or moving the position is performed on a real scene, and a real interactive object exists in the touch area (or the user wants to interact with one), the touch track of the object interaction can resemble or overlap the touch track of the view-switching or position-moving operation, which easily causes the operations to be confused. In addition, different users have different degrees of interest in different real interactive objects, so the likelihood of a given operation differs from user to user and object to object. By using the identity information of the user when there are at least two pieces of instruction content information, the method can predict the user's real operation intention more accurately, execute the user's touch instruction on the real scene precisely, and thereby solve the problems of poor interactivity and high interaction difficulty in existing VR panoramic touch interfaces.
Accordingly, the VR multi-terminal collaborative interaction apparatus, the electronic device, and the computer-readable storage medium provided by the embodiments of the present application have the same technical effects.
Drawings
Fig. 1 is a schematic flowchart of a possible VR multi-terminal collaborative interaction method according to an embodiment of the present application;
fig. 2 is a schematic structural block diagram of a possible VR multi-terminal collaborative interaction apparatus according to an embodiment of the present application;
fig. 3 is a schematic hardware structure diagram of a possible VR multi-terminal collaborative interaction apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural block diagram of a possible electronic device provided in an embodiment of the present application;
fig. 5 is a schematic structural block diagram of a possible computer-readable storage medium provided in an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a VR multi-terminal collaborative interaction method and related equipment that can solve the problems of poor interactivity and high interaction difficulty in current VR panoramic touch interfaces.
The terms "first," "second," "third," "fourth," and the like in the description and claims of this application and in the above-described drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
Referring to fig. 1, the VR multi-terminal collaborative interaction method provided in an embodiment of the present application may specifically include steps S110-S130.
S110, when a target VR panoramic touch interface receives a touch instruction from a first user, acquiring real interactive object information within a preset area corresponding to the touch instruction, where the VR panoramic touch interface is obtained by panoramic shooting of a real scene.
S120, acquiring instruction content information based on the touch track corresponding to the touch instruction.
S130, when there are at least two pieces of instruction content information, determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information, and executing the target instruction content in the VR panoramic touch interface.
For example, the user identity information may include the user's age, gender, occupation, and similar attributes. It can be understood that users of different ages, genders, and occupations have different degrees of sensitivity to the same real interactive object.
According to the VR multi-terminal collaborative interaction method provided by this embodiment, when a target VR panoramic touch interface receives a touch instruction from a first user, real interactive object information within a preset area corresponding to the touch instruction is acquired, where the VR panoramic touch interface is obtained by panoramic shooting of a real scene; instruction content information is acquired based on the touch track corresponding to the touch instruction; and, when there are at least two pieces of instruction content information, the target instruction content of the first user is determined according to the identity information of the first user and the real interactive object information, and the target instruction content is executed in the VR panoramic touch interface. A VR panoramic touch interface mostly displays a real scene; for example, in a home scene or a house renting and selling scene, a user can move, switch the viewing angle, and even interact with real objects or people in the real scene through the screen of a display terminal such as a touch phone. However, since these screen interactions generally have no dedicated operation button, they are performed as undifferentiated slides and clicks on the screen. When a touch operation such as switching the viewing angle or moving the position is performed on a real scene, and a real interactive object exists in the touch area (or the user wants to interact with one), the touch track of the object interaction can resemble or overlap the touch track of the view-switching or position-moving operation, which easily causes the operations to be confused. In addition, different users have different degrees of interest in different real interactive objects, so the likelihood of a given operation differs from user to user and object to object. By using the identity information of the user when there are at least two pieces of instruction content information, the method can predict the user's real operation intention more accurately, execute the user's touch instruction on the real scene precisely, and thereby solve the problems of poor interactivity and high interaction difficulty in existing VR panoramic touch interfaces.
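To make the S110-S130 flow concrete, the following is a minimal Python sketch of such a dispatcher. It is not part of the patent text: the class names, the region-indexed scene lookup, and the interest-score heuristic are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class RealObject:
    name: str
    interaction_types: set  # e.g. {"rotate", "open"}

@dataclass
class Scene:
    # hypothetical index built when the panorama is shot:
    # screen region -> real interactive objects visible there
    objects_by_region: dict = field(default_factory=dict)

def classify_track(points):
    """S120 helper: treat a very short track as a click, a longer one as a slide."""
    return "click" if len(points) < 3 else "slide"

def candidate_contents(track_type, objects):
    """S120/S130: collect every instruction content the touch could mean."""
    candidates = ["adjust_view_angle" if track_type == "slide" else "step"]
    needed = "rotate" if track_type == "slide" else "open"
    candidates += ["interact:" + o.name for o in objects
                   if needed in o.interaction_types]
    return candidates

def interest_score(identity, content):
    """Invented heuristic: how likely this user meant this content."""
    if content.startswith("interact:"):
        return 2.0 if identity.get("age", 99) < 14 else 0.5  # children favour toys
    return 1.0

def handle_touch(scene, region, points, identity):
    objects = scene.objects_by_region.get(region, [])                 # S110
    candidates = candidate_contents(classify_track(points), objects)  # S120
    if len(candidates) == 1:
        return candidates[0]
    return max(candidates, key=lambda c: interest_score(identity, c))  # S130

scene = Scene({"sofa_corner": [RealObject("toy", {"rotate", "open"})]})
slide = [(0, 0), (40, 5), (90, 8)]
print(handle_touch(scene, "sofa_corner", slide, {"age": 8}))   # interact:toy
print(handle_touch(scene, "sofa_corner", slide, {"age": 35}))  # adjust_view_angle
```

The point of the sketch is only the shape of the decision: candidates are collected from both the track type and the nearby real objects, and identity information breaks the tie only when more than one candidate survives.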
According to some embodiments, when there are at least two pieces of instruction content information, the determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information includes:
determining the track type of the touch track when there are at least two pieces of instruction content information;
when the touch track is a sliding track and a real interactive object exists within the preset area corresponding to the touch instruction, determining the interaction type of the real interactive object, where the sliding track is associated with view-angle-adjustment instruction content;
acquiring the identity information of the first user when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track;
and determining the target instruction content from the view-angle-adjustment instruction content and the real interactive object instruction content based on the identity information of the first user.
Illustratively, there is a correspondence between interactive objects and user identity information. For example, suppose the interactive object is a real toy in the scene whose interaction types include sliding to rotate or open it. If the acquired identity information of the first user indicates an adult, the view-angle-adjustment instruction content is determined to be the target instruction content. This avoids the situation in which a user who actually wants to adjust the viewing angle is pulled into the interactive object's interaction, shown a fixed picture, and forced to wait before being able to continue operating.
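Read together with the identity-type association described in the optional embodiment above (each real interactive object is associated with at least one identity type), the toy example reduces to a short rule. A hypothetical sketch, with invented field names:

```python
def resolve_slide_conflict(user_identity_type, real_object):
    """The slide matched both view adjustment and a rotate/open interaction:
    pick the object interaction only for users the object is associated with."""
    slide_interactions = {"rotate", "open"} & real_object["interaction_types"]
    if slide_interactions and user_identity_type in real_object["identity_types"]:
        return "real_object_interaction"
    return "adjust_view_angle"

toy = {"interaction_types": {"rotate", "open"}, "identity_types": {"child"}}
print(resolve_slide_conflict("adult", toy))  # adjust_view_angle: keep browsing
print(resolve_slide_conflict("child", toy))  # real_object_interaction
```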
According to some embodiments, when there are at least two pieces of instruction content information, the determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information includes:
determining the track type of the touch track when there are at least two pieces of instruction content information;
when the touch track is a click track and a real interactive object exists within the preset area corresponding to the touch instruction, determining the interaction type of the real interactive object, where a step control associated with step instruction content exists within the preset area corresponding to the click track;
acquiring the identity information of the first user when the interaction type of the real interactive object includes an opening interaction matching the click track;
and determining the target instruction content from the step instruction content and the real interactive object instruction content based on the identity information of the first user.
According to some embodiments, the identity information includes an identity type, the real interactive object is associated with at least one identity type, and the acquiring the identity information of the first user when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track includes:
when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track, acquiring first image information based on the front-facing imaging device of the terminal to which the target VR panoramic touch interface belongs;
and determining the identity information of the first user based on the first image information.
For example, the identity information of the first user may be extracted from information the user actively reported or pre-stored at login. In situations where login is not required, or where the logged-in account may differ from the person actually operating the screen, the identity information of the first user can instead be determined based on the first image information.
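A minimal sketch of that fallback order, assuming a hypothetical face-attribute estimator (the patent does not name a recognition model):

```python
def estimate_identity(first_image):
    """Stand-in for an unspecified face-attribute model; a real system would
    run age/gender estimation on the front-camera frame here."""
    return {"identity_type": "adult", "age": 30}

def identity_of_first_user(login_profile, first_image):
    # Prefer identity information the user reported or pre-stored at login...
    if login_profile and login_profile.get("operator_matches_account", False):
        return login_profile
    # ...and fall back to the first image information otherwise.
    return estimate_identity(first_image)

print(identity_of_first_user(None, first_image=b"raw camera bytes"))
```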
According to some embodiments, further comprising:
when the target VR panoramic touch interface receives at least two touch instructions within a preset time range, acquiring second image information based on the front-facing imaging device of the terminal to which the target VR panoramic touch interface belongs, where the at least two touch instructions are different;
and preferentially executing the real interactive object instruction content when the second image information indicates that the at least two touch instructions come from different users.
For example, when several users perform touch operations on the target VR panoramic touch interface on the screen of the same terminal, if the at least two touch instructions differ and the second image information indicates that they come from different users, the real interactive object instruction content is executed preferentially. This prevents a user who wants to interact with the real interactive object from missing the interaction, or becoming unable to perform it, because a view-switching or step instruction was executed first. Of course, if the two touch instructions include a view-switching or step instruction, the view-switching angle can be reduced or the step distance shortened so that the real interactive object instruction can still be executed and displayed while the view-switching or step instruction is also carried out.
For example, when several users perform touch operations on the target VR panoramic touch interface on the screen of the same terminal, the front-facing imaging device can analyze the type of each user and use eye tracking to recognize where on the screen each user's attention falls, thereby determining the correspondence between each touch instruction and a user type.
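A sketch of that arbitration rule under stated assumptions: the instruction records and the 0.5 reduction factors are invented for illustration, and user attribution is taken as already resolved (for example by the eye-tracking step just described).

```python
def arbitrate(instructions, distinct_users):
    """At least two different touch instructions arrived within the preset
    window. If the second image information shows they come from different
    users, run the real-object interaction first and shrink the competing
    view-switch / step so the interaction stays visible."""
    kinds = {i["kind"] for i in instructions}
    if len(kinds) < 2 or not distinct_users:
        return instructions                       # nothing to arbitrate
    ordered = sorted(instructions,
                     key=lambda i: i["kind"] != "real_object_interaction")
    for instr in ordered:
        if instr["kind"] == "adjust_view_angle":
            instr["angle"] *= 0.5                 # reduce the switch angle
        elif instr["kind"] == "step":
            instr["distance"] *= 0.5              # shorten the step
    return ordered

queue = [{"kind": "adjust_view_angle", "angle": 90.0},
         {"kind": "real_object_interaction", "object": "toy"}]
print(arbitrate(queue, distinct_users=True))  # object interaction runs first
```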
According to some embodiments, further comprising:
when the target VR panoramic touch interface is the panoramic touch interface of a real house, acquiring the position information of the real house corresponding to the target VR panoramic touch interface;
simulating the outdoor light brightness based on the position information of the real house and the current moment;
and generating the view foreground of the real lighting windows on the target VR panoramic touch interface according to the light brightness.
For example, once the position information of the real house corresponding to the target VR panoramic touch interface is acquired, the outdoor illumination condition can be determined from that position and the current moment. By simulating the outdoor light brightness and generating the view foreground of the real lighting windows on the target VR panoramic touch interface accordingly, the user sees the house more realistically.
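The patent does not prescribe a lighting model; as one plausible reading, a textbook solar-elevation approximation is enough to turn the house's latitude and the current moment into a window brightness:

```python
import math
from datetime import datetime

def solar_elevation_deg(lat_deg, day_of_year, solar_hour):
    """Textbook approximation of the sun's elevation angle; good enough to
    drive a simulated window brightness (not taken from the patent text)."""
    decl = -23.45 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)  # local clock hour used as solar hour
    lat, decl, h = map(math.radians, (lat_deg, decl, hour_angle))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(decl)
        + math.cos(lat) * math.cos(decl) * math.cos(h)))

def window_brightness(lat_deg, now):
    """Map the real house's latitude and the current moment to a 0..1 brightness
    used when rendering the lighting-window view foreground."""
    elev = solar_elevation_deg(lat_deg, now.timetuple().tm_yday,
                               now.hour + now.minute / 60.0)
    return max(0.0, math.sin(math.radians(elev)))  # crude clear-sky model

print(window_brightness(39.9, datetime(2023, 6, 21, 12, 0)))  # Beijing, noon
```

Longitude, weather, and window orientation are ignored here; a production implementation would fold those in.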
According to some embodiments, further comprising:
when the target VR panoramic touch interface is the panoramic touch interface of a real house, acquiring the position information of the real house corresponding to the target VR panoramic touch interface;
acquiring information about surrounding obstructions based on the position information of the real house;
simulating the incidence angle and intensity of outdoor light according to the position information and height information of the real house, the surrounding obstruction information, and the current moment;
and generating the view foreground of the real lighting windows and the indoor perspective foreground in the target VR panoramic touch interface according to the incidence angle and intensity.
Illustratively, the incidence angles and intensities of outdoor light entering through the house's different lighting windows can be simulated from the position information and height information of the real house, the surrounding obstruction information, and the current moment, giving users a more realistic house-viewing experience. In addition, the incidence angles and intensities of outdoor light through the different lighting windows at different moments of the day can be demonstrated within a short time by applying a time-compression ratio.
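Extending the same idea to surrounding obstructions is plain geometry. In the sketch below, all heights and distances are invented example values; a window is lit only while the sun sits above the angle the obstruction subtends at that window:

```python
import math

def incident_light(sun_elev_deg, sun_intensity, window_sill_m,
                   obstruction_height_m, obstruction_distance_m):
    """Is this window shadowed by an obstruction of the given height/distance?
    The window is lit only while the sun is above the angle the obstruction
    subtends at the window (a deliberate simplification)."""
    shadow_limit_deg = math.degrees(math.atan2(
        obstruction_height_m - window_sill_m, obstruction_distance_m))
    if sun_elev_deg <= shadow_limit_deg:
        return 0.0, shadow_limit_deg              # window in shadow
    return sun_intensity * math.sin(math.radians(sun_elev_deg)), shadow_limit_deg

# a 20 m building 15 m away, window sill 6 m up: shadow below ~43 degrees
for elev in (20, 50):
    lit, limit = incident_light(elev, 1.0, 6.0, 20.0, 15.0)
    print(f"sun at {elev} deg (shadow limit {limit:.1f} deg): intensity {lit:.2f}")
```

Stepping the same computation over a compressed sequence of timestamps yields the accelerated day-cycle demonstration mentioned above.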
The VR multi-terminal collaborative interaction method in the embodiments of the present application is described above; the VR multi-terminal collaborative interaction apparatus in the embodiments of the present application is described below.
Referring to fig. 2, an embodiment of a VR multi-terminal collaborative interaction apparatus in the embodiments of the present application may include:
a receiving unit 201, configured to acquire, when a target VR panoramic touch interface receives a touch instruction from a first user, real interactive object information within a preset area corresponding to the touch instruction, where the VR panoramic touch interface is obtained by panoramic shooting of a real scene;
an obtaining unit 202, configured to acquire instruction content information based on the touch track corresponding to the touch instruction;
and a determining unit 203, configured to determine, when there are at least two pieces of instruction content information, the target instruction content of the first user according to the identity information of the first user and the real interactive object information, and to execute the target instruction content in the VR panoramic touch interface.
According to the VR multi-terminal collaborative interaction apparatus provided by this embodiment, when a target VR panoramic touch interface receives a touch instruction from a first user, real interactive object information within a preset area corresponding to the touch instruction is acquired, where the VR panoramic touch interface is obtained by panoramic shooting of a real scene; instruction content information is acquired based on the touch track corresponding to the touch instruction; and, when there are at least two pieces of instruction content information, the target instruction content of the first user is determined according to the identity information of the first user and the real interactive object information, and the target instruction content is executed in the VR panoramic touch interface. As analyzed above for the method, by using the identity information of the user when there are at least two pieces of instruction content information, the apparatus can predict the user's real operation intention more accurately, execute the user's touch instruction on the real scene precisely, and thereby solve the problems of poor interactivity and high interaction difficulty in existing VR panoramic touch interfaces.
Fig. 2 describes the VR multi-terminal collaborative interaction apparatus in the embodiments of the present application from the perspective of modular functional entities. With reference to fig. 3, the following describes the apparatus in detail from the perspective of hardware processing; an embodiment of a VR multi-terminal collaborative interaction apparatus 300 in the embodiments of the present application includes:
an input device 301, an output device 302, a processor 303, and a memory 304, where there may be one or more processors 303 (fig. 3 takes one processor 303 as an example). In some embodiments of the present application, the input device 301, the output device 302, the processor 303, and the memory 304 may be connected by a bus or by other means; connection by a bus is illustrated in fig. 3.
By calling the operation instructions stored in the memory 304, the processor 303 is configured to perform the following steps:
when a target VR panoramic touch interface receives a touch instruction from a first user, acquiring real interactive object information within a preset area corresponding to the touch instruction, where the VR panoramic touch interface is obtained by panoramic shooting of a real scene;
acquiring instruction content information based on the touch track corresponding to the touch instruction;
and, when there are at least two pieces of instruction content information, determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information, and executing the target instruction content in the VR panoramic touch interface.
Optionally, when there are at least two pieces of instruction content information, the determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information includes:
determining the track type of the touch track when there are at least two pieces of instruction content information;
when the touch track is a sliding track and a real interactive object exists within the preset area corresponding to the touch instruction, determining the interaction type of the real interactive object, where the sliding track is associated with view-angle-adjustment instruction content;
acquiring the identity information of the first user when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track;
and determining the target instruction content from the view-angle-adjustment instruction content and the real interactive object instruction content based on the identity information of the first user.
Optionally, when there are at least two pieces of instruction content information, the determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information includes:
determining the track type of the touch track when there are at least two pieces of instruction content information;
when the touch track is a click track and a real interactive object exists within the preset area corresponding to the touch instruction, determining the interaction type of the real interactive object, where a step control associated with step instruction content exists within the preset area corresponding to the click track;
acquiring the identity information of the first user when the interaction type of the real interactive object includes an opening interaction matching the click track;
and determining the target instruction content from the step instruction content and the real interactive object instruction content based on the identity information of the first user.
Optionally, the identity information includes an identity type, the real interactive object is associated with at least one identity type, and the acquiring the identity information of the first user when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track includes:
when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track, acquiring first image information based on the front-facing imaging device of the terminal to which the target VR panoramic touch interface belongs;
and determining the identity information of the first user based on the first image information.
Optionally, the method further comprises:
when the target VR panoramic touch interface receives at least two touch instructions within a preset time range, acquiring second image information based on the front-facing imaging device of the terminal to which the target VR panoramic touch interface belongs, where the at least two touch instructions are different;
and preferentially executing the real interactive object instruction content when the second image information indicates that the at least two touch instructions come from different users.
Optionally, the method further includes:
when the target VR panoramic touch interface is the panoramic touch interface of a real house, acquiring the position information of the real house corresponding to the target VR panoramic touch interface;
simulating the outdoor light brightness based on the position information of the real house and the current moment;
and generating the view foreground of the real lighting windows on the target VR panoramic touch interface according to the light brightness.
Optionally, the method further includes:
when the target VR panoramic touch interface is the panoramic touch interface of a real house, acquiring the position information of the real house corresponding to the target VR panoramic touch interface;
acquiring information about surrounding obstructions based on the position information of the real house;
simulating the incidence angle and intensity of outdoor light according to the position information and height information of the real house, the surrounding obstruction information, and the current moment;
and generating the view foreground of the real lighting windows and the indoor perspective foreground in the target VR panoramic touch interface according to the incidence angle and intensity.
The processor 303 is also configured to perform any of the methods in the corresponding embodiments of fig. 1 by calling the operation instructions stored in the memory 304.
Referring to fig. 4, fig. 4 is a schematic diagram of an embodiment of an electronic device according to an embodiment of the present application.
As shown in fig. 4, an electronic device 400 according to an embodiment of the present application includes a memory 410, a processor 420, and a computer program 411 stored in the memory 410 and executable on the processor 420, where the processor 420 executes the computer program 411 to implement the following steps:
when a target VR panoramic touch interface receives a touch instruction from a first user, acquiring real interactive object information within a preset area corresponding to the touch instruction, where the VR panoramic touch interface is obtained by panoramic shooting of a real scene;
acquiring instruction content information based on the touch track corresponding to the touch instruction;
and, when there are at least two pieces of instruction content information, determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information, and executing the target instruction content in the VR panoramic touch interface.
Optionally, when there are at least two pieces of instruction content information, the determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information includes:
determining the track type of the touch track when there are at least two pieces of instruction content information;
when the touch track is a sliding track and a real interactive object exists within the preset area corresponding to the touch instruction, determining the interaction type of the real interactive object, where the sliding track is associated with view-angle-adjustment instruction content;
acquiring the identity information of the first user when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track;
and determining the target instruction content from the view-angle-adjustment instruction content and the real interactive object instruction content based on the identity information of the first user.
Optionally, when there are at least two pieces of instruction content information, the determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information includes:
determining the track type of the touch track when there are at least two pieces of instruction content information;
when the touch track is a click track and a real interactive object exists within the preset area corresponding to the touch instruction, determining the interaction type of the real interactive object, where a step control associated with step instruction content exists within the preset area corresponding to the click track;
acquiring the identity information of the first user when the interaction type of the real interactive object includes an opening interaction matching the click track;
and determining the target instruction content from the step instruction content and the real interactive object instruction content based on the identity information of the first user.
Optionally, the identity information includes an identity type, the real interactive object is associated with at least one identity type, and the acquiring the identity information of the first user when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track includes:
when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track, acquiring first image information based on the front-facing imaging device of the terminal to which the target VR panoramic touch interface belongs;
and determining the identity information of the first user based on the first image information.
Optionally, the method further comprises:
when the target VR panoramic touch interface receives at least two touch instructions within a preset time range, acquiring second image information based on the front-facing imaging device of the terminal to which the target VR panoramic touch interface belongs, where the at least two touch instructions are different;
and preferentially executing the real interactive object instruction content when the second image information indicates that the at least two touch instructions come from different users.
Optionally, the method further includes:
when the target VR panoramic touch interface is the panoramic touch interface of a real house, acquiring the position information of the real house corresponding to the target VR panoramic touch interface;
simulating the outdoor light brightness based on the position information of the real house and the current moment;
and generating the view foreground of the real lighting windows on the target VR panoramic touch interface according to the light brightness.
Optionally, the method further includes:
when the target VR panoramic touch interface is the panoramic touch interface of a real house, acquiring the position information of the real house corresponding to the target VR panoramic touch interface;
acquiring information about surrounding obstructions based on the position information of the real house;
simulating the incidence angle and intensity of outdoor light according to the position information and height information of the real house, the surrounding obstruction information, and the current moment;
and generating the view foreground of the real lighting windows and the indoor perspective foreground in the target VR panoramic touch interface according to the incidence angle and intensity.
In a specific implementation, when the processor 420 executes the computer program 411, any of the embodiments corresponding to fig. 1 may be implemented.
Since the electronic device described in this embodiment is the device used to implement the apparatus of the embodiments of the present application, based on the method described herein, a person skilled in the art can understand the specific implementation of this electronic device and its various variations. Therefore, how the electronic device implements the method of the embodiments is not described in detail here; any device that a person skilled in the art uses to implement the method of the embodiments of the present application falls within the intended scope of protection of this application.
Referring to fig. 5, fig. 5 is a schematic diagram of an embodiment of a computer-readable storage medium according to an embodiment of the present application.
As shown in fig. 5, the present embodiment provides a computer-readable storage medium 500 having a computer program 511 stored thereon, the computer program 511 implementing the following steps when executed by a processor:
when a target VR panoramic touch interface receives a touch instruction from a first user, acquiring real interactive object information within a preset area corresponding to the touch instruction, where the VR panoramic touch interface is obtained by panoramic shooting of a real scene;
acquiring instruction content information based on the touch track corresponding to the touch instruction;
and, when there are at least two pieces of instruction content information, determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information, and executing the target instruction content in the VR panoramic touch interface.
Optionally, when there are at least two pieces of instruction content information, the determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information includes:
determining the track type of the touch track when there are at least two pieces of instruction content information;
when the touch track is a sliding track and a real interactive object exists within the preset area corresponding to the touch instruction, determining the interaction type of the real interactive object, where the sliding track is associated with view-angle-adjustment instruction content;
acquiring the identity information of the first user when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track;
and determining the target instruction content from the view-angle-adjustment instruction content and the real interactive object instruction content based on the identity information of the first user.
Optionally, when there are at least two pieces of instruction content information, the determining the target instruction content of the first user according to the identity information of the first user and the real interactive object information includes:
determining the track type of the touch track when there are at least two pieces of instruction content information;
when the touch track is a click track and a real interactive object exists within the preset area corresponding to the touch instruction, determining the interaction type of the real interactive object, where a step control associated with step instruction content exists within the preset area corresponding to the click track;
acquiring the identity information of the first user when the interaction type of the real interactive object includes an opening interaction matching the click track;
and determining the target instruction content from the step instruction content and the real interactive object instruction content based on the identity information of the first user.
Optionally, the identity information includes an identity type, the real interactive object is associated with at least one identity type, and the acquiring the identity information of the first user when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track includes:
when the interaction type of the real interactive object includes a rotation interaction and an opening interaction matching the sliding track, acquiring first image information based on the front-facing imaging device of the terminal to which the target VR panoramic touch interface belongs;
and determining the identity information of the first user based on the first image information.
Optionally, the method further comprises:
when the target VR panoramic touch interface receives at least two touch instructions within a preset time range, acquiring second image information based on the front-facing imaging device of the terminal to which the target VR panoramic touch interface belongs, where the at least two touch instructions are different;
and preferentially executing the real interactive object instruction content when the second image information indicates that the at least two touch instructions come from different users.
Optionally, the method further includes:
when the target VR panoramic touch interface is the panoramic touch interface of a real house, acquiring the position information of the real house corresponding to the target VR panoramic touch interface;
simulating the outdoor light brightness based on the position information of the real house and the current moment;
and generating the view foreground of the real lighting windows on the target VR panoramic touch interface according to the light brightness.
Optionally, the method further includes:
when the target VR panoramic touch interface is the panoramic touch interface of a real house, acquiring the position information of the real house corresponding to the target VR panoramic touch interface;
acquiring information about surrounding obstructions based on the position information of the real house;
simulating the incidence angle and intensity of outdoor light according to the position information and height information of the real house, the surrounding obstruction information, and the current moment;
and generating the view foreground of the real lighting windows and the indoor perspective foreground in the target VR panoramic touch interface according to the incidence angle and intensity.
In a specific implementation, the computer program 511 may implement any of the embodiments corresponding to fig. 1 when executed by a processor.
It should be noted that, in the foregoing embodiments, the description of each embodiment has an emphasis, and reference may be made to the related description of other embodiments for a part that is not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
An embodiment of the present application further provides a computer program product including computer software instructions which, when run on a processing device, cause the processing device to execute the flow of the VR multi-terminal collaborative interaction method in the embodiment corresponding to fig. 1.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only one kind of logical function division, and other divisions are possible in practice; for instance, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, and some of their technical features may be equivalently replaced; such modifications and replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A VR multi-terminal collaborative interaction method is characterized by comprising the following steps:
in the case that a target VR panoramic touch interface receives a touch instruction of a first user, acquiring real interactive object information within a preset area range corresponding to the touch instruction, wherein the VR panoramic touch interface is obtained by panoramic shooting of a real scene;
acquiring instruction content information based on a touch track corresponding to the touch instruction;
and in the case that there are at least two pieces of instruction content information, determining target instruction content of the first user according to identity information of the first user and the real interactive object information, and executing the target instruction content in the VR panoramic touch interface.
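To make the three-step shape of claim 1 concrete, here is a minimal Python sketch under assumed names: RealObject, resolve_touch, the "object_owner" identity and the string-valued instruction contents are all invented for illustration, and the claim itself prescribes no particular data model or disambiguation rule.

```python
"""Minimal sketch of the flow in claim 1; all names are hypothetical."""
from dataclasses import dataclass

@dataclass
class RealObject:
    name: str
    interactions: tuple  # e.g. ("rotation", "starting")

def resolve_touch(track_type, region_objects, identity):
    """Return the target instruction content for one touch instruction."""
    # Step 1: derive every plausible instruction content from the touch
    # track; a slide can mean "adjust the view angle" or "operate a real
    # object that supports rotation/starting within the preset area".
    candidates = []
    if track_type == "slide":
        candidates.append("adjust_view_angle")
        for obj in region_objects:
            if {"rotation", "starting"} & set(obj.interactions):
                candidates.append(f"operate:{obj.name}")
    # Step 2: a single candidate needs no disambiguation.
    if len(candidates) == 1:
        return candidates[0]
    # Step 3: with at least two candidates, the user's identity and the
    # real-object information decide (the rule here is purely illustrative).
    for cand in candidates:
        if cand.startswith("operate:") and identity == "object_owner":
            return cand
    return "adjust_view_angle"

valve = RealObject("valve", ("rotation",))
print(resolve_touch("slide", [valve], "object_owner"))  # operate:valve
print(resolve_touch("slide", [], "visitor"))            # adjust_view_angle
```

The essential point of the sketch is that identity information and real-object information only enter once the touch track yields more than one piece of instruction content.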
2. The method according to claim 1, wherein the real interactive object information includes an interactive object type, and the determining, in the case that there are at least two pieces of instruction content information, the target instruction content of the first user according to the identity information of the first user and the real interactive object information comprises:
determining a track type of the touch track in the case that there are at least two pieces of instruction content information;
determining an interaction type of the real interactive object in the case that the touch track is a sliding track and a real interactive object exists within the preset area range corresponding to the touch instruction, wherein the sliding track is associated with view angle adjustment instruction content;
acquiring the identity information of the first user in the case that the interaction type of the real interactive object includes a rotation interaction and a starting interaction matched with the sliding track;
and determining the target instruction content from the view angle adjustment instruction content and real interactive object instruction content based on the identity information of the first user.
3. The method according to claim 1, wherein the real interactive object information includes an interactive object type, and the determining, in the case that there are at least two pieces of instruction content information, the target instruction content of the first user according to the identity information of the first user and the real interactive object information comprises:
determining a track type of the touch track in the case that there are at least two pieces of instruction content information;
determining an interaction type of the real interactive object in the case that the touch track is a click track and a real interactive object exists within the preset area range corresponding to the touch instruction, wherein a step control associated with step instruction content exists within the preset area range corresponding to the click track;
acquiring the identity information of the first user in the case that the interaction type of the real interactive object includes an opening interaction matched with the click track;
and determining the target instruction content from the step instruction content and real interactive object instruction content based on the identity information of the first user.
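Claims 2 and 3 take the same disambiguation route and differ only in the branch selected by the track type (sliding versus click). A standalone sketch of the click branch of claim 3 follows; the names step_control_present and "object_owner" and the tie-break rule are invented for illustration.

```python
"""Sketch of the click-track branch in claim 3; names are hypothetical."""

def resolve_click(region_objects, step_control_present, identity):
    """region_objects: list of dicts like {"name": ..., "interactions": [...]}."""
    candidates = []
    # A click on a step control normally means "walk to the next viewpoint".
    if step_control_present:
        candidates.append("step_instruction")
    # A click can also open a real object (a door, a lamp) in the area.
    for obj in region_objects:
        if "opening" in obj["interactions"]:
            candidates.append("open:" + obj["name"])
    if len(candidates) == 1:
        return candidates[0]
    # Ambiguity between stepping and opening is settled by identity.
    opens = [c for c in candidates if c.startswith("open:")]
    if opens and identity == "object_owner":
        return opens[0]
    return "step_instruction"

door = {"name": "door", "interactions": ["opening"]}
print(resolve_click([door], step_control_present=True, identity="object_owner"))  # open:door
print(resolve_click([door], step_control_present=True, identity="visitor"))       # step_instruction
```

The sliding branch of claim 2 is obtained by swapping the step control for view angle adjustment and the opening interaction for rotation/starting.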
4. The method according to claim 2, wherein the identity information includes an identity type, at least one identity type is associated with the real interactive object, and the acquiring the identity information of the first user in the case that the interaction type of the real interactive object includes a rotation interaction and a starting interaction matched with the sliding track comprises:
in the case that the interaction type of the real interactive object includes a rotation interaction and a starting interaction matched with the sliding track, acquiring first image information based on a front-facing imaging device to which the target VR panoramic touch interface belongs;
and determining the identity information of the first user based on the first image information.
5. The method of claim 1, further comprising:
in the case that the target VR panoramic touch interface receives at least two touch instructions within a preset time range, acquiring second image information based on the front-facing imaging device to which the target VR panoramic touch interface belongs, wherein the at least two touch instructions are different;
and preferentially executing the real interactive object instruction content in the case that the second image information indicates that the at least two touch instructions come from different users.
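Claim 5 reads as an arbitration rule for near-simultaneous touches. The sketch below assumes the instructions in `pending` already fall within the preset time range and reduces the front-camera analysis to a boolean `same_user` flag; both simplifications are illustrative, not from the patent.

```python
"""Sketch of the multi-user arbitration in claim 5; names are assumed."""

def arbitrate(pending, same_user):
    """pending: touch instructions received within the preset time range,
    each a (timestamp, kind) tuple with kind in {"real_object",
    "view_angle", "step"}; same_user: outcome of the front-facing image
    analysis, which is not modelled here."""
    if same_user or len(pending) < 2:
        # Single user (or single instruction): keep arrival order.
        return [kind for _, kind in sorted(pending)]
    # Different users: real-object instruction content is executed first.
    prioritized = sorted(pending, key=lambda p: (p[1] != "real_object", p[0]))
    return [kind for _, kind in prioritized]

print(arbitrate([(0.0, "view_angle"), (0.2, "real_object")], same_user=False))
# ['real_object', 'view_angle']
```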
6. The method of claim 1, further comprising:
in the case that the target VR panoramic touch interface is a real house panoramic touch interface, acquiring position information of the real house corresponding to the target VR panoramic touch interface;
simulating outdoor light brightness based on the position information of the real house and the current time;
and generating a view foreground of the real lighting window on the target VR panoramic touch interface according to the light brightness.
7. The method of claim 1, further comprising:
in the case that the target VR panoramic touch interface is a real house panoramic touch interface, acquiring position information of the real house corresponding to the target VR panoramic touch interface;
acquiring peripheral obstruction information based on the position information of the real house;
simulating an incident angle and an intensity of outdoor light according to the position information and height information of the real house, the peripheral obstruction information, and the current time;
and generating a view foreground of the real lighting window and an indoor perspective view foreground in the target VR panoramic touch interface according to the incident angle and the intensity.
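Claims 6 and 7 leave the light model unspecified; a coarse textbook solar-position approximation is one plausible realization. In the sketch below the declination and elevation formulas are standard approximations, while the single obstruction elevation angle, the 0.1 ambient floor, and the Beijing-like coordinates are invented simplifications.

```python
"""Sketch of the daylight simulation in claims 6-7; formulas are textbook
approximations and all thresholds are invented for illustration."""
import math
from datetime import datetime, timezone

def solar_elevation(lat_deg, lon_deg, when_utc):
    """Coarse solar elevation in degrees; accurate to a degree or two."""
    doy = when_utc.timetuple().tm_yday
    # Approximate declination of the sun for day-of-year `doy`.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (doy + 10)))
    # Convert UTC to approximate local solar time via the longitude.
    solar_hours = when_utc.hour + when_utc.minute / 60.0 + lon_deg / 15.0
    hour_angle = 15.0 * (solar_hours - 12.0)
    lat, dec, ha = map(math.radians, (lat_deg, decl, hour_angle))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(dec)
        + math.cos(lat) * math.cos(dec) * math.cos(ha)))

def window_light(lat_deg, lon_deg, when_utc, obstruction_elev_deg):
    """Return (incident_angle_deg, relative_intensity) for a lighting window;
    the peripheral obstruction is reduced to one blocking elevation angle."""
    elev = solar_elevation(lat_deg, lon_deg, when_utc)
    if elev <= obstruction_elev_deg:
        return elev, 0.1  # assumed ambient floor when the sun is blocked
    return elev, max(0.0, math.sin(math.radians(elev)))

# Illustrative call: an assumed Beijing-like position, around local noon,
# with a 12-degree obstruction skyline.
when = datetime(2023, 3, 22, 4, 0, tzinfo=timezone.utc)
angle, intensity = window_light(39.9, 116.4, when, 12.0)
print(f"incident angle {angle:.1f} deg, relative intensity {intensity:.2f}")
```

A renderer would then scale the view foreground of the lighting window, and in claim 7 the indoor perspective foreground as well, by the returned angle and intensity.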
8. A VR multi-terminal collaborative interaction apparatus, comprising:
a receiving unit, configured to acquire, in the case that a target VR panoramic touch interface receives a touch instruction of a first user, real interactive object information within a preset area range corresponding to the touch instruction, wherein the VR panoramic touch interface is obtained by panoramic shooting of a real scene;
an acquisition unit, configured to acquire instruction content information based on a touch track corresponding to the touch instruction;
and a determining unit, configured to determine, in the case that there are at least two pieces of instruction content information, target instruction content of the first user according to identity information of the first user and the real interactive object information, and to execute the target instruction content in the VR panoramic touch interface.
9. An electronic device, comprising a memory and a processor, wherein the processor is configured to implement the steps of the VR multi-terminal collaborative interaction method according to any one of claims 1 to 7 when executing a computer program stored in the memory.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the VR multi-terminal collaborative interaction method according to any one of claims 1 to 7.
CN202310281149.0A 2023-03-22 2023-03-22 VR multi-terminal cooperative interaction method and related equipment Active CN115981517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310281149.0A CN115981517B (en) 2023-03-22 2023-03-22 VR multi-terminal cooperative interaction method and related equipment

Publications (2)

Publication Number Publication Date
CN115981517A 2023-04-18
CN115981517B 2023-06-02

Family

ID=85970576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310281149.0A Active CN115981517B (en) 2023-03-22 2023-03-22 VR multi-terminal cooperative interaction method and related equipment

Country Status (1)

Country Link
CN (1) CN115981517B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108027653A (en) * 2015-09-09 2018-05-11 微软技术许可有限责任公司 haptic interaction in virtual environment
US20190212815A1 (en) * 2018-01-10 2019-07-11 Samsung Electronics Co., Ltd. Method and apparatus to determine trigger intent of user
US20190378330A1 (en) * 2018-06-06 2019-12-12 Ke.Com (Beijing) Technology Co., Ltd. Method for data collection and model generation of house
CN112015271A (en) * 2020-03-10 2020-12-01 简吉波 Virtual reality control method and device based on cloud platform and virtual reality equipment
CN112631429A (en) * 2020-12-28 2021-04-09 天翼阅读文化传播有限公司 Gaze point voice interaction device and method in virtual reality scene
CN113791687A (en) * 2021-09-15 2021-12-14 咪咕视讯科技有限公司 Interaction method and device in VR scene, computing equipment and storage medium
CN113885345A (en) * 2021-10-29 2022-01-04 广州市技师学院(广州市高级技工学校、广州市高级职业技术培训学院、广州市农业干部学校) Interaction method, device and equipment based on intelligent home simulation control system
CN115657846A (en) * 2022-10-20 2023-01-31 苏州数孪数字科技有限公司 Interaction method and system based on VR digital content
CN115801943A (en) * 2021-09-08 2023-03-14 华为技术有限公司 Display method, electronic device, and storage medium

Also Published As

Publication number Publication date
CN115981517B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
CN106060419B (en) A kind of photographic method and mobile terminal
CN110703913B (en) Object interaction method and device, storage medium and electronic device
CN111556278A (en) Video processing method, video display device and storage medium
KR20180005689A (en) Information processing method, terminal and computer storage medium
CN106326678A (en) Sample room experiencing method, equipment and system based on virtual reality
CN108829468B (en) Three-dimensional space model skipping processing method and device
TW202304212A (en) Live broadcast method, system, computer equipment and computer readable storage medium
CN105912116A (en) Intelligent projection method and projector
CN112752132A (en) Cartoon picture bullet screen display method and device, medium and electronic equipment
CN113596574A (en) Video processing method, video processing apparatus, electronic device, and readable storage medium
CN107346197B (en) Information display method and device
CN113269781A (en) Data generation method and device and electronic equipment
CN115981517A (en) VR multi-terminal collaborative interaction method and related equipment
CN112527170A (en) Equipment visualization control method and device and computer readable storage medium
CN108958690B (en) Multi-screen interaction method and device, terminal equipment, server and storage medium
CN109992178B (en) Control method and device of target component, storage medium and electronic device
CN115981518B (en) VR demonstration user operation method and related equipment
CN112399265B (en) Method and system for adding content to image based on negative space recognition
CN115407916A (en) Interface display method and device, electronic equipment and storage medium
CN113989427A (en) Illumination simulation method and device, electronic equipment and storage medium
CN109995988A (en) A kind of control method and device for robot of taking pictures
WO2020248682A1 (en) Display device and virtual scene generation method
CN111696193B (en) Internet of things control method, system and device based on three-dimensional scene and storage medium
CN110730222B (en) Remote camera shooting presentation method
CN113810624A (en) Video generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant