CN112150223A - Method, system and terminal for checking article information and article label in VR model

Method, system and terminal for checking article information and article label in VR model

Info

Publication number
CN112150223A
CN112150223A
Authority
CN
China
Prior art keywords
information
point
model
observation
observation picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910570127.XA
Other languages
Chinese (zh)
Inventor
杨彬
胡亦朗
朱毅
杨钰柯
王怡丁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
You Can See (Beijing) Technology Co., Ltd.
Original Assignee
Beike Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beike Technology Co Ltd filed Critical Beike Technology Co Ltd
Priority to CN201910570127.XA priority Critical patent/CN112150223A/en
Priority to JP2021576083A priority patent/JP7287509B2/en
Priority to AU2020304463A priority patent/AU2020304463B2/en
Priority to PCT/CN2020/098593 priority patent/WO2020259694A1/en
Priority to CA3145342A priority patent/CA3145342A1/en
Priority to US17/095,702 priority patent/US11120618B2/en
Publication of CN112150223A publication Critical patent/CN112150223A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides a VR-based method for viewing indoor article information, which comprises the following steps: determining position information and view angle information of a virtual observation point in the VR model, and determining a displayed observation picture based on the position information and the view angle information; determining the visible articles in the observation picture according to the observation picture; receiving a point touch instruction from a user for any visible article in the observation picture; and providing introduction information of the corresponding article in response to the point touch instruction. While the user views a house through VR, introduction information of the corresponding article can be provided in response to the user's point touch on any visible article in the observation picture, so that the user can fully understand the simulated decoration scheme being viewed and learn the detailed introduction information of each article placed indoors. This avoids the problem that a user viewing a house through VR only learns the indoor placement positions but cannot learn the information of the placed articles.

Description

Method, system and terminal for checking article information and article label in VR model
Technical Field
The invention relates to the technical field of VR (virtual reality), in particular to a method, a system and a terminal for checking article information and article labels in a VR model.
Background
With the development of VR technology, houses are no longer experienced only in the actual physical space: a user can view a house through VR and select a simulated decoration scheme for it, so that the decoration effect of the house can be simulated in the VR scene. However, the user still cannot learn the detailed information of the articles placed in the rooms of the simulated decoration scheme. For example, a 65-inch television may be placed in the living room of a simulated decoration scheme, but the user cannot obtain the detailed parameters and the market price of this television in the VR scene. Therefore, by viewing the house through VR, the user cannot learn the detailed information of the articles placed in the rooms of the simulated decoration scheme.
Disclosure of Invention
The embodiments of the invention aim to provide a method, a system and a terminal for viewing article information and article labels in a VR model. In the method for viewing article information in a VR model, while a user views a house, the position information and view angle information of a virtual observation point in the VR model are determined, and the displayed observation picture is determined based on the position information and the view angle information; the visible articles in the observation picture are determined according to the observation picture; and introduction information of the corresponding article is provided in response to the user's point touch on any visible article in the observation picture. The user can thus fully understand the simulated decoration scheme being viewed and learn the detailed introduction information of each article placed indoors, which avoids the problem that a user viewing a house through VR only learns the indoor placement positions but cannot learn the information of the placed articles.
In order to achieve the above object, an embodiment of the present invention provides a method for viewing information of an item in a VR model, where the method includes:
determining position information and view angle information of a virtual observation point in the VR model, and determining a displayed observation picture based on the position information and the view angle information;
determining visible articles in the observation picture according to the observation picture;
receiving a point touch instruction for any visible article in the observation picture; and
providing introduction information of the article in response to the point touch instruction.
Optionally, the introductory information includes at least one of the following information about the item: size information, price information, brand information, and sales information.
Optionally, the point touch instruction is generated by:
generating a direction point based on the position touched by the user;
generating a direction line along the connecting line between the virtual observation point and the direction point; and
generating a point touch instruction for the corresponding article when the direction line first intersects the surface of a visible article in the observation picture;
wherein the direction point is the spatial coordinate point, in the VR scene, of the touched position.
The embodiment of the invention also provides a method for checking the article label in the VR model, which comprises the following steps:
determining position information and view angle information of a virtual observation point in the VR model, and determining a displayed observation picture based on the position information and the view angle information;
determining visible articles in the observation picture according to the observation picture;
receiving a point touch instruction for the label of any article in the observation picture; and
expanding the content of the clicked label in response to the point touch instruction.
Optionally, the content in the tag includes at least one of the following information: size information, price information, brand information, and sales information.
The embodiment of the invention also provides a system for checking the information of the articles in the VR model, which comprises:
a data acquisition module for performing the following operations:
determining position information and view angle information of a virtual observation point within the VR model; and
receiving a point touch instruction from a user for any visible article in the observation picture;
a processor to perform the following operations:
determining a displayed observation picture based on the position information and the view angle information;
determining visible articles in the observation picture according to the observation picture;
receiving the point touch instruction, and providing introduction information of the corresponding item in response to the point touch instruction.
Optionally, the introductory information includes at least one of the following information about the item: size information, price information, brand information, and sales information.
Optionally, the processor is further configured to:
generating a direction point based on the position touched by the user;
generating a direction line along the connecting line between the virtual observation point and the direction point; and
generating a point touch instruction for the corresponding article when the direction line first intersects the surface of a visible article in the observation picture;
wherein the direction point is the spatial coordinate point, in the VR scene, of the touched position.
An embodiment of the present invention further provides a system for viewing an item tag in a VR model, where the system includes:
a data acquisition module for performing the following operations:
determining position information and view angle information of a virtual observation point within the VR model; and
receiving a point touch instruction for the label of any article in the observation picture;
a processor to perform the following operations:
determining a displayed observation picture based on the position information and the view angle information;
determining visible articles in the observation picture according to the observation picture;
expanding the content of the clicked label in response to the point touch instruction.
Optionally, the content in the tag includes at least one of the following information: size information, price information, brand information, and sales information.
In another aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes:
at least one processor, at least one memory, a communication interface, and a bus; wherein
The processor, the memory and the communication interface complete mutual communication through the bus;
the communication interface is used for information transmission between the electronic equipment and the communication equipment of the terminal;
the memory stores program instructions executable by the processor to invoke the program instructions to perform the above-described method of viewing item information within a VR model and/or method of viewing item tags within a VR model.
In another aspect, the present invention provides a non-transitory computer readable storage medium storing computer instructions that cause the computer to perform the above-described method of viewing item information within a VR model and/or method of viewing item tags within a VR model.
With the above technical solution, while the user views a house through VR, the position information and view angle information of the virtual observation point in the VR model are determined, and the displayed observation picture is determined based on the position information and the view angle information; the visible articles in the observation picture are determined according to the observation picture; and by receiving the user's point touch instruction for any visible article in the observation picture, introduction information of the corresponding article can be provided in response to that instruction. The user can thus fully understand the simulated decoration scheme being viewed and learn the detailed introduction information of each article placed in the rooms, which avoids the problem that a user viewing a house through VR cannot learn the information of the placed articles.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
fig. 1 is a schematic flow chart of a method for viewing information of an item in a VR model according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram illustrating a method for viewing item tags in a VR model according to an embodiment of the invention;
FIG. 3 illustrates a displayed observation picture of a living room in a VR model according to an embodiment of the present invention;
FIG. 4 is a structural diagram of a system for viewing article information in a VR model according to an embodiment of the present invention;
fig. 5 is a block diagram of the electronic device provided in this embodiment.
Description of the reference numerals
201 click position
202 television
203 TV tag
204 sofa tag
501 processor
502 memory
503 communication interface
504 bus
505 data acquisition module
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a basic flowchart illustrating a method for viewing item information in a VR model according to an embodiment of the present invention; as shown in fig. 1, the method includes:
S11, determining the position information and the view angle information of the virtual observation point in the VR model, and determining the displayed observation picture based on the position information and the view angle information;
s12, determining visible articles in the observation picture according to the observation picture;
s13, receiving a point touch instruction for any visible article in the observation picture; and
s14, responding to the point touch instruction, providing introduction information of the article.
When viewing a house through VR, a user can experience an immersive house tour without leaving home, and the VR house-viewing technique also allows the simulated decoration effect of the house to be viewed, so that the customer can likewise experience the simulated decoration as if present in person. In the process of viewing a house through VR, each time the virtual observation point moves, the corresponding observation picture changes, and after the observation picture changes the visible articles in the picture change accordingly. The position of the virtual observation point in the VR model can be regarded as the position of the user in the room, and the view angle of the virtual observation point in the VR model can be regarded as the viewing direction of the user when observing the room; because the view angle is limited, the picture observed from a certain point in the room at a certain observation angle covers a fixed area. In the pre-stored VR model, the corresponding observation picture can be determined from the position and view angle of the virtual observation point in the VR model, and the visible articles in the observation picture can then be determined from the determined observation picture. To learn more about the articles placed in the rooms of the simulated decoration scheme, the user can click any visible article in the observation picture, for example the sofa, to learn its specific information. Specifically, the method receives the user's point touch instruction for any visible article in the observation picture and provides the introduction information of the corresponding article in response to the point touch instruction, so that while learning the simulated decoration effect by viewing the house through VR, the user also learns the detailed introduction information of the articles placed in each room of the simulated decoration scheme.
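To make the visibility step concrete, the following is a minimal sketch (not the patent's implementation) of determining the visible articles from the position and view angle of the virtual observation point, assuming each pre-stored article is summarized by a representative center coordinate and the observation picture is approximated by a cone-shaped field of view; the names PlacedItem, visible_items and fov_deg are illustrative assumptions.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class PlacedItem:
    name: str
    center: Vec3   # pre-stored spatial coordinates of the article in the VR model

def visible_items(position: Vec3, view_direction: Vec3,
                  items: List[PlacedItem], fov_deg: float = 90.0) -> List[PlacedItem]:
    """Return the articles that fall inside the observation picture, i.e. whose direction
    from the virtual observation point deviates from the view direction by less than
    half the field of view."""
    half_fov = math.radians(fov_deg) / 2.0
    norm = math.sqrt(sum(c * c for c in view_direction)) or 1.0
    view = tuple(c / norm for c in view_direction)
    result = []
    for item in items:
        to_item = tuple(ic - pc for ic, pc in zip(item.center, position))
        dist = math.sqrt(sum(c * c for c in to_item)) or 1.0
        cos_angle = sum(v * t for v, t in zip(view, to_item)) / dist
        if math.acos(max(-1.0, min(1.0, cos_angle))) <= half_fov:
            result.append(item)
    return result

# Example: a television in front of the observation point is visible, a cabinet behind it is not
tv = PlacedItem("television", (3.0, 0.5, 1.2))
cabinet = PlacedItem("cabinet", (-2.0, 0.0, 1.0))
print([i.name for i in visible_items((0.0, 0.0, 1.5), (1.0, 0.0, 0.0), [tv, cabinet])])
```

A production VR engine would use the full view frustum plus occlusion tests rather than a simple cone, but the principle of deriving the observation picture's contents from the observation point's position and view angle is the same.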
For the method for viewing information of an item in a VR model provided by the embodiment of the present invention, the introduction information includes at least one of the following information about the item: size information, price information, brand information, and sales information.
Wherein, the introduction information about the article may include at least one of the following information: size information, price information, brand information, and sales information. For example, when the user clicks the sofa in the observation screen, brand information, material information, internal structure information, size information and market price information of the sofa are provided, so that the user can know the parameter information and price information of each article while knowing the placement position of the article in the simulated decoration scheme, and more determining factors are provided for the user to select from a plurality of simulated decoration schemes.
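As a simple illustration of how such introduction information might be stored per article, the sketch below uses a small record type; the field names and example values are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IntroductionInfo:
    size: Optional[str] = None    # e.g. overall dimensions or screen size
    price: Optional[str] = None   # e.g. market reference price
    brand: Optional[str] = None
    sales: Optional[str] = None   # sales or availability information

# Hypothetical entry looked up when the user taps the sofa in the observation picture
sofa_info = IntroductionInfo(size="2200 x 950 x 820 mm", price="4999",
                             brand="ExampleBrand", sales="in stock")
```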
For the method for viewing article information in a VR model provided by the embodiment of the invention, the point touch instruction is generated by the following steps:
S131, generating a direction point based on the position touched by the user;
S132, generating a direction line along the connecting line between the virtual observation point and the direction point; and
S133, generating a point touch instruction for the corresponding article when the direction line first intersects the surface of a visible article in the observation picture;
wherein the direction point is the spatial coordinate point, in the VR scene, of the touched position.
For a given observation picture, when the user clicks an article in the picture, a direction point can be generated based on the user's point touch position; the direction point is a spatial coordinate point in the VR model. In the pre-stored VR model, the spatial coordinate position of the virtual observation point after each movement can be determined based on the user's control, and the spatial coordinate positions of the articles placed in each room of the pre-stored simulated decoration scheme are also known. A direction line is then generated along the connecting line between the virtual observation point and the direction point. When the direction line first intersects the surface of a visible object in the observation picture, that is, when the spatial coordinates of a point on the direction line first coincide with the spatial coordinates of an object in the observation picture, the article clicked by the user can be identified, a point touch instruction for that article is generated, and the introduction information of the corresponding article is provided to the user according to the point touch instruction.
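The following sketch illustrates this direction-line test under simplifying assumptions: each visible article's surface is approximated by an axis-aligned bounding box, and the article whose box the direction line hits first (smallest distance from the observation point) is the one the point touch instruction targets. The names SceneItem, first_hit and ray_box_distance and the box representation are illustrative, not the patent's implementation.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SceneItem:
    name: str
    box_min: Vec3   # axis-aligned bounding box approximating the article's surface
    box_max: Vec3

def ray_box_distance(origin: Vec3, direction: Vec3,
                     box_min: Vec3, box_max: Vec3) -> Optional[float]:
    """Distance along the direction line to its first intersection with the box, or None."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:               # line parallel to this pair of faces
            if o < lo or o > hi:
                return None
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
        if t_near > t_far:
            return None
    return t_near

def first_hit(observation_point: Vec3, direction_point: Vec3,
              visible_items: List[SceneItem]) -> Optional[str]:
    """Return the name of the first visible article whose surface the direction line
    (from the virtual observation point through the direction point) intersects."""
    direction = tuple(dp - op for op, dp in zip(observation_point, direction_point))
    hits = []
    for item in visible_items:
        t = ray_box_distance(observation_point, direction, item.box_min, item.box_max)
        if t is not None:
            hits.append((t, item.name))
    return min(hits)[1] if hits else None   # closest intersection wins

# Example: a tap whose direction point lies toward the television
tv = SceneItem("television", (2.0, 0.5, 1.0), (3.0, 0.6, 1.8))
sofa = SceneItem("sofa", (2.0, 3.0, 0.0), (4.0, 4.0, 0.9))
print(first_hit((0.0, 0.0, 1.5), (1.0, 0.2, 1.4), [tv, sofa]))  # -> "television"
```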
Fig. 2 is a schematic flowchart illustrating a method for viewing an item tag in a VR model according to an embodiment of the present invention, and as shown in fig. 2, the method for viewing an item tag in a VR model includes:
determining position information and view angle information of a virtual observation point in the VR model, and determining a displayed observation picture based on the position information and the view angle information;
determining visible articles in the observation picture according to the observation picture;
receiving a point touch instruction for the label of any article in the observation picture; and
expanding the content of the clicked label in response to the point touch instruction.
In the process of viewing a house through VR, each time the virtual observation point moves, the corresponding observation picture changes, and after the observation picture changes the visible articles in the picture change accordingly. The position of the virtual observation point in the VR model can be regarded as the position of the user in the room, and the view angle of the virtual observation point in the VR model can be regarded as the viewing direction of the user when observing the room; because the view angle is limited, the picture observed from a certain point in the room at a certain observation angle covers a fixed area. In the pre-stored VR model, the corresponding observation picture can be determined from the position and view angle of the virtual observation point in the VR model, and the visible articles in the observation picture can then be determined from the determined observation picture. To learn more about the articles placed in the rooms of the simulated decoration scheme, the user can click the label of any visible article in the observation picture, for example the label of the sofa, to learn the sofa's specific information. Specifically, the method receives the user's point touch instruction for the label of any visible article in the observation picture and provides the introduction information of the corresponding article by expanding the content of the clicked label in response to the point touch instruction, so that while viewing the house through VR the user also learns the detailed introduction information of the articles placed in each room of the simulated decoration scheme.
For the method for viewing article labels in a VR model provided by the embodiment of the invention, the content of the label includes at least one of the following information about the article: size information, price information, brand information, and sales information.
Wherein, regarding the tag content of the article, the content in the tag may include at least one of the following information: size information, price information, brand information, and sales information. For example, when the user clicks the label of the sofa in the observation screen, the brand information, the material information, the internal structure information, the size information and the market price information of the sofa are provided, so that the user can know the information of each article while knowing the placement position of the article in the simulated decoration scheme, and more determining factors are provided for the user to select from a plurality of simulated decoration schemes.
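As a rough illustration of this label mechanism (an assumed data shape, not the patent's implementation), a label can simply carry its expandable content fields and an expanded flag that the point touch instruction flips; the names ItemLabel and on_label_tapped and the sample values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ItemLabel:
    article: str
    content: Dict[str, str] = field(default_factory=dict)  # size / price / brand / sales fields
    expanded: bool = False

def on_label_tapped(label: ItemLabel) -> Dict[str, str]:
    """Respond to a point touch instruction on the label by expanding its content."""
    label.expanded = True
    return label.content        # the UI layer renders these fields in the observation picture

sofa_label = ItemLabel("sofa", {"brand": "ExampleBrand", "size": "2200 mm wide", "price": "4999"})
print(on_label_tapped(sofa_label))
```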
First method embodiment
Fig. 3 illustrates a VR-based observation picture of a living room according to an embodiment of the present invention. As shown in Fig. 3, after the virtual observation point moves, the position information and view angle information of the virtual observation point in the VR model can be determined, and the displayed observation picture can be determined based on the position information and the view angle information. The visible articles in the observation picture are then determined from the determined observation picture. According to the position information and view angle information of the virtual observation point in the VR model, the observation picture is determined to be that of the virtual observation point at a certain position in the living room. During observation the user wants to learn the introduction information of the television placed in the living room and clicks the television 202 in the observation picture. A direction point is generated based on the user's point touch position 201, and a direction line is generated along the connecting line between the current virtual observation point and the direction point. When it is determined that the spatial coordinates of a point on the direction line fall on the screen surface of the television 202, a point touch instruction for the television 202 is generated, and the introduction information of the television 202 is provided in response to that instruction, as shown in the following table:
TABLE 1
Television brand: Xiaomi
Reference price: 3499
Screen ratio: 16:9
Backlight source: LED
Screen resolution: Ultra HD 4K
Screen size: 65 inches
Product color: Gold
Second method embodiment
First, after the virtual observation point moves, the position information and view angle information of the virtual observation point in the VR model are determined, and the displayed observation picture is determined based on the position information and the view angle information. The visible articles in the observation picture are then determined from the determined observation picture. According to the position information and view angle information of the virtual observation point in the VR model, the observation picture is determined to be that of the virtual observation point at a certain position in the living room. During observation the user wants to learn the detailed information of the television placed in the living room and clicks the television tag 203 in the observation picture; in response to the point touch instruction for the television tag 203, the content of the tag is expanded, as shown in the following table:
TABLE 1
Television brand: Xiaomi
Reference price: 3499
Screen ratio: 16:9
Backlight source: LED
Screen resolution: Ultra HD 4K
Screen size: 65 inches
Product color: Gold
Fig. 4 is a structural diagram of a system for viewing article information in a VR model according to an embodiment of the present invention; as shown in Fig. 4, the system includes:
a data acquisition module for performing the following operations:
determining position information and view angle information of a virtual observation point within the VR model; and
receiving a point touch instruction of a user for any visible article in the observation picture;
a processor to perform the following operations:
determining a displayed observation picture based on the position information and the view angle information;
determining visible articles in the observation picture according to the observation picture;
receiving the point touch instruction, and providing introduction information of the corresponding item in response to the point touch instruction.
The system can comprise a data acquisition module and a processor. When a user views a house through VR, each time the virtual observation point moves, the corresponding observation picture changes, and after the observation picture changes the visible articles in the picture change accordingly. Specifically, the data acquisition module can obtain, in real time, the position information of the virtual observation point in the VR model and the view angle information of the virtual observation point in the VR model. The position of the virtual observation point in the VR model can be regarded as the position of the user in the room, and the view angle of the virtual observation point in the VR model can be regarded as the viewing direction of the user when observing the room; because the view angle is limited, the picture observed from a certain point in the room at a certain observation angle covers a fixed area. From the pre-stored VR scene data, the processor can determine the corresponding observation picture according to the position information and view angle information of the virtual observation point in the VR model, and can then determine the visible articles in the observation picture according to the determined observation picture. When viewing a house through VR, the customer can experience an immersive house tour without leaving home and can also view the simulated decoration effect of the house, likewise as if present in person. To learn more about the articles placed in the rooms of the simulated decoration scheme, the user can click any visible article in the observation picture, for example the sofa, to learn its specific information. The data acquisition module receives the user's point touch instruction for the sofa in the observation picture, and the processor provides the introduction information of the corresponding article in response to the point touch instruction, so that while learning the simulated decoration effect by viewing the house through VR, the user also learns the detailed introduction information of the articles placed in each room of the simulated decoration scheme.
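The split between the two parts can be sketched as follows; the class and method names (DataAcquisitionModule, ItemInfoProcessor, get_pose, poll_tap) and the injected callables are assumptions made for illustration, not the patent's interfaces, and the VR model object is assumed to expose the visibility and direction-line tests sketched earlier.

```python
class DataAcquisitionModule:
    """Collects the virtual observation point state and the user's point touch events."""
    def __init__(self, get_pose, poll_tap):
        self.get_pose = get_pose    # () -> (position, view_angle) of the virtual observation point
        self.poll_tap = poll_tap    # () -> direction point of the latest tap, or None

class ItemInfoProcessor:
    """Derives the observation picture, its visible articles, and the introduction information."""
    def __init__(self, vr_model, catalog):
        self.vr_model = vr_model    # pre-stored VR model with article coordinates
        self.catalog = catalog      # article name -> introduction information

    def step(self, acquisition):
        position, view_angle = acquisition.get_pose()
        items = self.vr_model.visible_items(position, view_angle)  # articles in the observation picture
        tap = acquisition.poll_tap()
        if tap is None:
            return None
        hit = self.vr_model.first_hit(position, tap, items)        # first surface the direction line meets
        return self.catalog.get(hit) if hit else None              # introduction info to display
```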
For the viewing system of the item information in the VR model provided by the embodiment of the present invention, the introduction information includes at least one of the following information about the item: size information, price information, brand information, and sales information.
Wherein, the introduction information about the article may include at least one of the following information: size information, price information, brand information, and sales information. For example, when the user clicks the sofa in the observation picture, the brand information, material information, internal structure information, size information and market price information of the sofa are provided, so that the user can learn the parameter information and price information of each article while learning where the article is placed in the simulated decoration scheme, which gives the user more deciding factors when selecting from a plurality of simulated decoration schemes.
For the system for viewing information of an item in a VR model provided by the embodiment of the present invention, the processor is further configured to perform the following operations:
generating a direction point based on the position touched by the user;
generating a direction line along the connecting line between the virtual observation point and the direction point; and
generating a point touch instruction for the corresponding article when the direction line first intersects the surface of a visible article in the observation picture;
wherein the direction point is the spatial coordinate point, in the VR scene, of the touched position.
For a given observation picture, when the user clicks an article in the picture, the processor can generate a direction point based on the user's point touch position; the direction point is a spatial coordinate point in the VR model. In the pre-stored VR model, the spatial coordinate position of the virtual observation point after each movement can be determined based on the user's control, and the spatial coordinate positions of the articles placed in each room of the pre-stored simulated decoration scheme are also known. A direction line is generated along the connecting line between the virtual observation point and the direction point, and the processor generates a point touch instruction for the corresponding article when the direction line first intersects the surface of a visible object in the observation picture, that is, when the spatial coordinates of a point on the direction line first coincide with the spatial coordinates of an object in the observation picture. The article clicked by the user can thus be identified, a point touch instruction for that article is generated, and the introduction information of the corresponding article is provided to the user according to the point touch instruction.
The embodiment of the invention provides a system for viewing article labels in a VR model, which comprises:
a data acquisition module for performing the following operations:
determining position information and view angle information of a virtual observation point within the VR model; and
receiving a point touch instruction of a label of any article in the observation picture;
a processor to perform the following operations:
determining a displayed observation picture based on the position information and the view angle information;
determining visible articles in the observation picture according to the observation picture;
expanding the content of the clicked label in response to the point touch instruction.
In the process of viewing a house through VR, each time the virtual observation point moves, the corresponding observation picture changes, and after the observation picture changes the visible articles in the picture change accordingly. Specifically, the data acquisition module can obtain, in real time, the position information of the virtual observation point in the VR model and the view angle information of the virtual observation point in the VR model; the position of the virtual observation point in the VR model can be regarded as the position of the user in the room, and the view angle of the virtual observation point in the VR model can be regarded as the viewing direction of the user when observing the room. From the pre-stored VR scene data, the processor can determine the corresponding observation picture according to the position information and view angle information of the virtual observation point in the VR model, and can then determine the visible articles in the observation picture according to the determined observation picture. When viewing a house through VR, the user can experience an immersive house tour without leaving home, and the simulated decoration effect of the house can also be viewed based on the VR house-viewing technique, so that the customer experiences the simulated decoration as if present in person. To learn more about the articles placed in the rooms of the simulated decoration scheme, the user can click the label of any visible article in the observation picture, for example the label 204 of the sofa, to learn the sofa's specific information. The data acquisition module receives the user's point touch instruction for the sofa label 204 in the observation picture, and the processor expands the content of the sofa label 204 in response to the point touch instruction, so that while learning the simulated decoration effect by viewing the house through VR, the user also learns the detailed introduction information of the articles placed in each room of the simulated decoration scheme.
For the system for viewing article labels in a VR model provided by an embodiment of the invention, the content of the label includes at least one of the following information about the article: size information, price information, brand information, and sales information.
Wherein, regarding the content of the article's label, the label may include at least one of the following information: size information, price information, brand information, and sales information. For example, when the user clicks the label of the sofa in the observation picture, the brand information, material information, internal structure information, size information and market price information of the sofa are provided, so that the user can learn the parameter information and price information of each article while learning where the article is placed in the simulated decoration scheme, which gives the user more deciding factors when selecting from a plurality of simulated decoration schemes.
The method provided by this embodiment is generally executed by a terminal, such as a mobile phone or a computer, although the embodiment is not limited thereto. The terminal is part of the VR-based indoor article information viewing system; holding the terminal, the user can view a house through VR and view the simulated decoration scheme. Specifically, the user can click any visible article in the observation picture on the terminal screen, that is, the user's point touch instruction for any visible article in the observation picture is received, and after the instruction is responded to, the introduction information of the clicked article can be displayed in a selected area of the terminal screen.
Fig. 5 shows a block diagram of the electronic device provided in the present embodiment.
Referring to fig. 5, the electronic device includes: a processor 501, a memory 502, a communication interface 503, and a bus 504; wherein
the processor 501, the memory 502 and the communication interface 503 complete mutual communication through the bus 504;
the communication interface 503 is used for information transmission between the electronic device and the communication device of the terminal;
the processor 501 is configured to call program instructions in the memory 502 to perform the methods provided by the above-mentioned method embodiments, for example, including: determining position information and view angle information of a virtual observation point in the VR model, and determining a displayed observation picture based on the position information and the view angle information; determining visible articles in the observation picture according to the observation picture; receiving a point touch instruction for any visible article in the observation picture; and providing introduction information of the article in response to the point touch instruction. Generating a direction point based on the position touched by the user; generating a direction line along the connecting line between the virtual observation point and the direction point; and generating a point touch instruction for the corresponding article when the direction line first intersects the surface of a visible article in the observation picture; wherein the direction point is the spatial coordinate point, in the VR scene, of the touched position. Determining position information and view angle information of a virtual observation point in the VR model, and determining a displayed observation picture based on the position information and the view angle information; determining visible articles in the observation picture according to the observation picture; receiving a point touch instruction for the label of any article in the observation picture; and expanding the content of the clicked label in response to the point touch instruction. The content in the label comprises at least one of the following information: size information, price information, brand information, and sales information.
The present embodiments provide a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the methods provided by the above method embodiments, for example, including: determining position information and view angle information of a virtual observation point in the VR model, and determining a displayed observation picture based on the position information and the view angle information; determining visible articles in the observation picture according to the observation picture; receiving a point touch instruction for any visible article in the observation picture; and providing introduction information of the article in response to the point touch instruction. Generating a direction point based on the position touched by the user; generating a direction line along the connecting line between the virtual observation point and the direction point; and generating a point touch instruction for the corresponding article when the direction line first intersects the surface of a visible article in the observation picture; wherein the direction point is the spatial coordinate point, in the VR scene, of the touched position. Determining position information and view angle information of a virtual observation point in the VR model, and determining a displayed observation picture based on the position information and the view angle information; determining visible articles in the observation picture according to the observation picture; receiving a point touch instruction for the label of any article in the observation picture; and expanding the content of the clicked label in response to the point touch instruction. The content in the label comprises at least one of the following information: size information, price information, brand information, and sales information.
The present embodiments disclose a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the above-described methods provided by the above-described method embodiments.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solutions of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and the simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention do not describe every possible combination.
Those skilled in the art will understand that all or part of the steps in the methods of the above embodiments may be implemented by a program, which is stored in a storage medium and includes several instructions that enable a single-chip microcomputer, a chip, or a processor to execute all or part of the steps in the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In addition, any combination of various different implementation manners of the embodiments of the present invention is also possible, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.

Claims (10)

1. A method for viewing item information within a VR model, the method comprising:
determining position information and view angle information of a virtual observation point in the VR model, and determining a displayed observation picture based on the position information and the view angle information;
determining visible articles in the observation picture according to the observation picture;
receiving a point touch instruction for any visible article in the observation picture; and
providing introduction information of the article in response to the point touch instruction.
2. A method of viewing item information within a VR model as claimed in claim 1, wherein the introductory information includes at least one of the following information about the item: size information, price information, brand information, and sales information.
3. The method for viewing item information in a VR model as claimed in claim 1, wherein the point touch instruction is generated by:
generating a direction point based on the position touched by the user;
generating a direction line along the connecting line between the virtual observation point and the direction point; and
generating a point touch instruction for the corresponding article when the direction line first intersects the surface of a visible article in the observation picture;
wherein the direction point is the spatial coordinate point, in the VR scene, of the touched position.
4. A method for viewing item labels within a VR model, the method comprising:
determining position information and view angle information of a virtual observation point in the VR model, and determining a displayed observation picture based on the position information and the view angle information;
determining visible articles in the observation picture according to the observation picture;
receiving a point touch instruction for the label of any article in the observation picture; and
expanding the content of the clicked label in response to the point touch instruction.
5. The method of viewing an item tag within a VR model of claim 4, wherein the content in the tag includes at least one of: size information, price information, brand information, and sales information.
6. A system for viewing item information within a VR model, the system comprising:
a data acquisition module for performing the following operations:
determining position information and view angle information of a virtual observation point within the VR model; and
receiving a point touch instruction of a user for any visible article in the observation picture;
a processor to perform the following operations:
determining a displayed observation picture based on the position information and the view angle information;
determining visible articles in the observation picture according to the observation picture;
receiving the point touch instruction, and providing introduction information of the corresponding item in response to the point touch instruction.
7. A viewing system for item information within a VR model as claimed in claim 6, wherein the introductory information includes at least one of the following information about the item: size information, price information, brand information, and sales information.
8. A system for viewing item information within a VR model as recited in claim 6, wherein the processor is further configured to:
generating a direction point based on the position touched by the user;
generating a direction line along the connecting line between the virtual observation point and the direction point; and
generating a point touch instruction for the corresponding article when the direction line first intersects the surface of a visible article in the observation picture;
wherein the direction point is the spatial coordinate point, in the VR scene, of the touched position.
9. A system for viewing item labels within a VR model, the system comprising:
a data acquisition module for performing the following operations:
determining position information and view angle information of a virtual observation point within the VR model; and
receiving a point touch instruction of a label of any article in the observation picture;
a processor to perform the following operations:
determining a displayed observation picture based on the position information and the view angle information;
determining visible articles in the observation picture according to the observation picture;
expanding the content of the clicked label in response to the point touch instruction.
10. A viewing system for item tags within a VR model as claimed in claim 9, wherein the content in the tags includes at least one of the following information: size information, price information, brand information, and sales information.
CN201910570127.XA 2019-06-27 2019-06-27 Method, system and terminal for checking article information and article label in VR model Pending CN112150223A (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN201910570127.XA CN112150223A (en) 2019-06-27 2019-06-27 Method, system and terminal for checking article information and article label in VR model
JP2021576083A JP7287509B2 (en) 2019-06-27 2020-06-28 Method and apparatus for displaying item information in current space and media
AU2020304463A AU2020304463B2 (en) 2019-06-27 2020-06-28 Method and apparatus for displaying item information in current space, and medium
PCT/CN2020/098593 WO2020259694A1 (en) 2019-06-27 2020-06-28 Method and apparatus for displaying item information in current space, and medium
CA3145342A CA3145342A1 (en) 2019-06-27 2020-06-28 Method and apparatus for displaying item information in current space, and medium
US17/095,702 US11120618B2 (en) 2019-06-27 2020-11-11 Display of item information in current space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910570127.XA CN112150223A (en) 2019-06-27 2019-06-27 Method, system and terminal for checking article information and article label in VR model

Publications (1)

Publication Number Publication Date
CN112150223A true CN112150223A (en) 2020-12-29

Family

ID=73868895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910570127.XA Pending CN112150223A (en) 2019-06-27 2019-06-27 Method, system and terminal for checking article information and article label in VR model

Country Status (1)

Country Link
CN (1) CN112150223A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180114341A1 (en) * 2016-10-26 2018-04-26 Zhonglian Shengshi Culture (Beijing) Co., Ltd. Image Display Method, Client Terminal and System, and Image Sending Method and Server
CN106600709A (en) * 2016-12-15 2017-04-26 苏州酷外文化传媒有限公司 Decoration information model-based VR virtual decoration method
CN108492379A (en) * 2018-03-23 2018-09-04 平安科技(深圳)有限公司 VR sees room method, apparatus, computer equipment and storage medium
CN108877848A (en) * 2018-05-30 2018-11-23 链家网(北京)科技有限公司 The method and device that user's operation is coped in room mode is said in virtual three-dimensional space

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115454255A (en) * 2022-10-09 2022-12-09 如你所视(北京)科技有限公司 Article display switching method and device, electronic equipment and storage medium
CN115454255B (en) * 2022-10-09 2024-02-13 如你所视(北京)科技有限公司 Switching method and device for article display, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
JP6798094B2 (en) Methods and devices for marking and displaying space sizes in virtual 3D house models
US9779444B2 (en) Recommendations utilizing visual image analysis
CN109829981B (en) Three-dimensional scene presentation method, device, equipment and storage medium
US20190311544A1 (en) Image processing for augmented reality
EP2420946A2 (en) User terminal, remote terminal, and method for sharing augmented reality service
CN110176197B (en) Holographic display method, system, storage medium and equipment
CN108765581B (en) Method and device for displaying label in virtual three-dimensional space
CN108037904B (en) Visual data pushing method and system
US10921796B2 (en) Component information retrieval device, component information retrieval method, and program
CN111338721A (en) Online interaction method, system, electronic device and storage medium
CN112150223A (en) Method, system and terminal for checking article information and article label in VR model
CN112200899B (en) Method for realizing model service interaction by adopting instantiation rendering
CN111767456A (en) Method and device for pushing information
CN108388395B (en) Image clipping method and device and terminal
CN109087399B (en) Method for rapidly synchronizing AR space coordinate system through positioning map
CN110415019A (en) The advertisement recommended method and Related product of display screen
CN104618499A (en) Information processing method and electronic equipment
CN112860060B (en) Image recognition method, device and storage medium
CN112651801B (en) Method and device for displaying house source information
CN113888257A (en) Article-based display method, device and program product
CN112535392B (en) Article display system based on optical communication device, information providing method, apparatus and medium
CN114003323A (en) Information display method, device, equipment and storage medium
CN107145313A (en) Method for displaying image and device
CN111290721A (en) Online interaction control method, system, electronic device and storage medium
CN103870227B (en) A kind of method for displaying image and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210412

Address after: 100085 Floor 101 102-1, No. 35 Building, No. 2 Hospital, Xierqi West Road, Haidian District, Beijing

Applicant after: Seashell Housing (Beijing) Technology Co.,Ltd.

Address before: Unit 05, room 112, 1 / F, block C, comprehensive service area, Nangang Industrial Zone, Binhai New Area, Tianjin 300280

Applicant before: BEIKE TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220408

Address after: 100085 8th floor, building 1, Hongyuan Shouzhu building, Shangdi 6th Street, Haidian District, Beijing

Applicant after: As you can see (Beijing) Technology Co.,Ltd.

Address before: 100085 Floor 101 102-1, No. 35 Building, No. 2 Hospital, Xierqi West Road, Haidian District, Beijing

Applicant before: Seashell Housing (Beijing) Technology Co.,Ltd.