WO2020168792A1 - Augmented reality display method and apparatus, electronic device, and storage medium - Google Patents

Augmented reality display method and apparatus, electronic device, and storage medium

Info

Publication number
WO2020168792A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional model
virtual item
real scene
image
information
Application number
PCT/CN2019/125029
Other languages
French (fr)
Chinese (zh)
Inventor
程侠宽
李一山
周熠
张岩
Original Assignee
北京三快在线科技有限公司
Application filed by 北京三快在线科技有限公司
Publication of WO2020168792A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • the present disclosure relates to the field of augmented reality (Augmented Reality, AR), and in particular to an augmented reality display method, augmented reality display device, electronic equipment, and computer-readable storage medium.
  • the purpose of the present disclosure is to provide an augmented reality display method, an augmented reality display device, an electronic device, and a computer-readable storage medium, so as to overcome, at least to some extent, problems caused by the limitations of related technologies, such as the inability to associate product images with scenes and the inability of consumers to interact with products.
  • an augmented reality display method includes: acquiring entity information in a collected real scene image; extracting, for a three-dimensional model of a virtual item, size information of the three-dimensional model; determining display parameters of the three-dimensional model according to the entity information and the size information; and displaying, based on the display parameters, a three-dimensional model image of the virtual item in the real scene image.
  • the acquiring entity information in the collected real scene image includes: identifying at least one entity in the real scene image; acquiring boundary information and location information of the entity .
  • the determining the display parameters of the three-dimensional model includes: determining the display size and zoom ratio of the three-dimensional model; and determining the display position of the three-dimensional model.
  • the augmented reality display method further includes: in response to an interactive instruction for the virtual item, displaying a processed three-dimensional model image of the virtual item in the real scene image.
  • the displaying the processed three-dimensional model image of the virtual item in the real scene image includes: processing the three-dimensional model of the virtual item and determining display parameters of the processed three-dimensional model, the processing including at least one of the following: color processing, position processing, display size processing, and scaling processing; and, based on the display parameters of the processed three-dimensional model, displaying the processed three-dimensional model image of the virtual item in the real scene image.
  • the augmented reality display method further includes: generating an interaction instruction for the virtual item in response to a user's gesture action.
  • the generating an interactive instruction for the virtual item in response to the user's gesture action includes: when the gesture action is determined to be a first gesture, generating a first interaction instruction for moving the virtual item; when the gesture action is determined to be a second gesture, generating a second interaction instruction for zooming the virtual item.
  • the three-dimensional model of the virtual item is generated by the following steps: acquiring video data of the virtual item; extracting a preset number of static images from the video data; and generating the three-dimensional model of the virtual item from the static images.
  • an augmented reality display device includes: an information acquisition module configured to acquire entity information in a collected real scene image; an information extraction module configured to extract, for a three-dimensional model of a virtual item, size information of the three-dimensional model; a parameter determination module configured to determine display parameters of the three-dimensional model according to the entity information and the size information; and an image display module configured to display, based on the display parameters, a three-dimensional model image of the virtual item in the real scene image.
  • an electronic device including a processor and a memory, wherein computer-readable instructions are stored on the memory, and the computer-readable instructions, when executed by the processor, implement the augmented reality display method of any of the foregoing exemplary embodiments.
  • a computer-readable storage medium having a computer program stored thereon, and the computer program, when executed by a processor, implements the augmented reality display method in any of the foregoing exemplary embodiments.
  • the display parameters of the three-dimensional model of the virtual item are determined from the entity in the real scene image, so that the user can see exactly how the virtual item fits into the real scene.
  • human-computer interaction through gestures not only makes the user's online shopping more enjoyable, it also gives the user a sense of immersion; the shopping process becomes more flexible and accurate, improving the conversion rate.
  • FIG. 1 schematically shows a flowchart of an augmented reality display method in an exemplary embodiment of the present disclosure
  • FIG. 2 schematically shows a flowchart of partial steps of an augmented reality display method in an embodiment of the present disclosure
  • FIG. 3 schematically shows a flowchart of partial steps of an augmented reality display method in an embodiment of the present disclosure
  • FIG. 4 schematically shows a flowchart of partial steps of an augmented reality display method in an embodiment of the present disclosure
  • FIG. 5 schematically shows a flowchart of partial steps of an augmented reality display method in an embodiment of the present disclosure
  • FIG. 6 schematically shows a flowchart of steps of an augmented reality display method in an embodiment of the present disclosure
  • FIG. 7 schematically shows a flowchart of steps of an augmented reality display method in an embodiment of the present disclosure
  • FIG. 8 schematically shows a schematic diagram of an implementation scenario of an augmented reality display method in an exemplary embodiment of the present disclosure
  • FIG. 9 schematically shows a schematic structural diagram of an augmented reality display device in an exemplary embodiment of the present disclosure.
  • FIG. 10 schematically shows an electronic device for implementing an augmented reality display method in an exemplary embodiment of the present disclosure
  • FIG. 11 schematically shows a computer-readable storage medium for implementing an augmented reality display method in an exemplary embodiment of the present disclosure.
  • an augmented reality display method is first provided, which is applied to a terminal device.
  • the terminal device may be a computer or notebook equipped with a camera, or multifunctional glasses with an augmented reality display; it may also be a combination of a smart terminal as the capture and processing side and a home TV connected by a communication link as the display side, or any other electronic terminal device with an augmented reality display function.
  • the augmented reality display method may mainly include the following steps:
  • Step S101 Obtain entity information in the collected real scene image.
  • the camera device of the smart terminal can be used to collect real scene images.
  • the camera device may include one camera or multiple cameras located at different angles. When the camera device includes only one camera, it can capture the scene over 360°, collecting an image every 15°.
  • the collection distance of the scene can be set to five values: 1, 2, 3, 4 and 5 meters, and the collection environment can be set to two scenes: indoors during the day and indoors at night; correspondingly, when the camera device includes multiple cameras, only the different collection distances and collection environments need to be set.
  • the entity in the entity information may be an object contained in a real scene image, and the number of objects is not limited.
  • because entities are in different environments, they refer to different objects. When the real scene is a home, the entity can be a coffee table, chair, sofa, table, etc.; when the real scene is a vehicle, the entity can be a seat, display screen, window, etc.; when the real scene is an office, the entity can be a water dispenser, desk, computer, telephone, etc.; when the real scene is a shopping mall, the entity can be a mannequin, a full-length mirror, etc.
  • entity information refers to information related to the entity, such as size information, location information, color information, shape information, and environmental information, where environmental information relates to the environment in which the entity is located, for example the distance between the entity and the wall, the brightness of the environment, and the distance to other entities in the environment.
  • Step S102 Extract the size information of the three-dimensional model for the three-dimensional model of the virtual item.
  • the entity information in the collected real scene image can be obtained.
  • the size information of the three-dimensional model of the virtual item can be obtained.
  • virtual items are presented in the form of 3D models.
  • the virtual item may be an item in an online store, such as furniture, clothing, equipment, musical instruments, accessories, etc.
  • the item image can be obtained by entering the item's name on the display page to retrieve the provided image, or by capturing the item with a camera tool.
  • the characteristic information of the three-dimensional model of the table may be its length, width, height, and other dimensions. Different size information can be obtained for different virtual items to facilitate subsequent operations.
  • Step S103 Determine the display parameters of the three-dimensional model according to the entity information and the size information.
  • the entity information in the real scene image is obtained in step S101, and the size information of the three-dimensional model of the virtual item is obtained in step S102; on this basis, the display parameters used to show the three-dimensional model of the online virtual item in the real scene image can be determined from the entity information and the size information, as illustrated by the sketch below.
  • for example, if the entity in the real scene image is a chair whose entity information (size, shape, position, and color) has already been obtained, and the virtual item is a table, the size ratio between the table model and the chair determines the display size of the table, while the other entity information of the chair determines display parameters such as the display position and the adapted color of the table.
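As a rough illustration of this step, the following Python sketch derives a display size, zoom ratio and display position from the measured dimensions and position of a recognized entity and the stored dimensions of the virtual item's 3D model. The function name, the 1:1 reference ratio and the 0.2 m placement gap are illustrative assumptions, not values taken from the disclosure.

```python
import numpy as np

def compute_display_parameters(entity_size_m, entity_position, model_size_m,
                               reference_ratio=1.0):
    """Hypothetical helper: derive display size, zoom ratio and position of a
    virtual item's 3D model from a recognized entity.

    entity_size_m   -- (length, width, height) of the recognized entity, metres
    entity_position -- (x, y, z) of the entity in the scene coordinate system
    model_size_m    -- (length, width, height) stored with the 3D model
    reference_ratio -- assumed real-world ratio between item and entity
    """
    entity = np.asarray(entity_size_m, dtype=float)
    model = np.asarray(model_size_m, dtype=float)

    # Zoom ratio: scale the model so its dimensions stay in proportion to the entity.
    zoom_ratio = reference_ratio * float(np.min(entity / model))

    # Display size is simply the model size after scaling.
    display_size = model * zoom_ratio

    # Display position: place the item beside the entity, offset along x by half
    # of both footprints plus a small gap (0.2 m is an arbitrary illustrative value).
    gap = 0.2
    offset_x = entity[0] / 2 + display_size[0] / 2 + gap
    display_position = np.asarray(entity_position, dtype=float) + np.array([offset_x, 0.0, 0.0])

    return display_size, zoom_ratio, display_position
```

For the chair-and-table example above, entity_size_m would hold the chair's measured dimensions and model_size_m the table model's stored dimensions.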
  • Step S104 Based on the display parameters, display the three-dimensional model image of the virtual item in the real scene image.
  • step S103 the display parameters of the three-dimensional model of the virtual item have been determined.
  • the three-dimensional model image of the virtual item can be displayed in the real scene image.
  • the relevant parameters of the camera device need to be set; using the virtual item's 3D model image as a reference, the 3D model image and the real scene image are superimposed.
  • the virtual item surrounds the entity in the real scene, or covers the entity, or is placed next to the entity.
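A minimal compositing sketch of step S104 is given below. It assumes the 3D model has already been rendered to a BGRA image at the computed display size, that the display position has been converted to pixel coordinates, and that the pasted region lies fully inside the scene image; error handling is omitted.

```python
import numpy as np

def overlay_model_image(scene_bgr, model_bgra, top_left):
    """Alpha-blend a rendered 3D-model image (BGRA) onto a real scene image (BGR)
    at the computed display position. top_left is (x, y) in scene pixels."""
    x, y = top_left
    h, w = model_bgra.shape[:2]
    roi = scene_bgr[y:y + h, x:x + w].astype(np.float32)

    alpha = model_bgra[:, :, 3:4].astype(np.float32) / 255.0   # per-pixel opacity
    model_rgb = model_bgra[:, :, :3].astype(np.float32)

    blended = alpha * model_rgb + (1.0 - alpha) * roi
    scene_bgr[y:y + h, x:x + w] = blended.astype(np.uint8)
    return scene_bgr
```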
  • the display of the three-dimensional model image of the virtual item in the real scene image can be on the glasses lens of the augmented reality display, or on the screen of the smart terminal, such as the display screen of a mobile phone, a computer, and a home TV.
  • in this exemplary embodiment, the display information of the three-dimensional model of the virtual item is determined from the entities in the real scene, so that the most realistic size information of the virtual item is obtained; further, the virtual item is displayed in the real scene, which gives the user a valuable reference for deciding whether the item meets expectations and is worth purchasing, improves the success rate of online purchases, and reduces returns and exchanges caused by the inability to grasp product information, saving cost and resources.
  • acquiring entity information in a real scene image includes the following steps:
  • Step S201 Identify at least one entity in the real scene image.
  • a real scene image can be collected, and in this step, entities in the real scene image can be identified.
  • the entity recognition in the real scene image can be performed on the entire real scene image or on a specific area of it; before recognition, the real scene image can be preprocessed.
  • the preprocessing methods can include downsampling, binarization, filtering, etc.
  • downsampling adapts the real scene image to the size of the area to be displayed; binarization makes the selected feature points independent of color and easier to distinguish; Gaussian filtering reduces noise so that the acquired real scene image is not affected by scratches, dust, etc.
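A possible OpenCV rendering of this preprocessing pipeline is sketched below; the target width, kernel size and Otsu thresholding are illustrative choices rather than parameters specified by the disclosure.

```python
import cv2

def preprocess_scene_image(image_bgr, target_width=640):
    """Down-sample, denoise and binarize a captured real-scene image before
    entity recognition. Parameter values are illustrative defaults."""
    # Down-sampling: fit the frame to the display/processing area.
    scale = target_width / image_bgr.shape[1]
    resized = cv2.resize(image_bgr, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_AREA)

    # Gaussian filtering: suppress noise such as dust or scratches.
    blurred = cv2.GaussianBlur(resized, (5, 5), 0)

    # Binarization: make candidate feature points independent of colour.
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return resized, binary
```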
  • there is no limit to the number of entities recognized in the real scene image; it can be one or several. The more entities are recognized, the more accurately the display size and/or zoom ratio of the 3D model and its display position can be determined.
  • the way of recognizing an entity may be to recognize a specific color or a specific shape, as long as the required entity can be recognized, and this exemplary embodiment does not limit the recognition mode.
  • Step S202 Obtain boundary information and location information of the entity.
  • step S201 the entity in the real scene image has been identified.
  • relevant information of the identified entity can be obtained, and the relevant information may be boundary information and location information of the entity.
  • the boundary information includes information corresponding to the boundary of the space where the entity is located, and the location information includes information corresponding to the location of the entity in the real scene image.
  • the boundary information can be the length, width, and height of the entity.
  • for example, a three-dimensional coordinate system of the entity in the real scene image can be established, together with the camera coordinate system of the camera device that captures the real scene image; the location information of the entity is then obtained from the transformation between the three-dimensional coordinate system and the camera coordinate system.
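As a small sketch of that transformation, assuming the camera pose (rotation R and translation t) is already known from calibration or pose estimation:

```python
import numpy as np

def entity_position_in_camera(entity_point_world, R, t):
    """Transform an entity's 3D position from the scene (world) coordinate
    system into the camera coordinate system: X_cam = R @ X_world + t.
    R is a 3x3 rotation matrix and t a 3-vector translation; both are
    assumed to be known here."""
    return R @ np.asarray(entity_point_world, dtype=float) + np.asarray(t, dtype=float)
```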
  • the boundary information of the entity can be determined by image recognition algorithms, such as Sobel edge detection algorithm, Laplacian operator, etc.
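The following sketch shows one way such boundary information might be approximated with Sobel edges and a bounding box; the gradient threshold is an arbitrary illustrative value, and a real implementation would be considerably more robust.

```python
import cv2
import numpy as np

def entity_boundary(gray):
    """Estimate an entity's boundary with Sobel edges and return the bounding
    box of the largest contour. A deliberately simplified sketch."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    edges = (magnitude > 80).astype(np.uint8) * 255      # illustrative threshold

    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)                      # (x, y, w, h)
```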
  • This embodiment can identify entities in the real scene image and determine its related information, and can accurately grasp the entity information in the real environment, which has very important reference significance for determining the display parameters of virtual items.
  • obtaining a three-dimensional model of a virtual item includes the following steps:
  • Step S301 Obtain video data of the virtual item.
  • the shooting terminal may include a surveillance camera, a front camera of a smart terminal, a rear camera of a smart terminal, etc., and may also include a processing terminal that processes video data after acquiring it.
  • the collected video data can be stored in the database and server as xml, json and other data.
  • Step S302. Extract a preset number of static images from the video data.
  • step S301 the video data of the virtual item has been collected.
  • static images can be extracted from the video data.
  • the frame interval should not be set too small, otherwise static images are extracted even though the virtual item has not changed, producing too many static images and wasting resources; nor should the frame interval be set too large, otherwise too few static images are extracted, not enough effective information is captured, and the user experience suffers.
  • the static images of the preset number of frames can include the front view, back view, left view, right view, top view and bottom view of the virtual item.
  • the method of extracting static images may be a coding method using the image compression standard of JPEG (Joint Photographic Experts Group, Joint Photographic Experts Group).
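A simple frame-extraction sketch along these lines, using OpenCV's VideoCapture; the interval of 30 frames is only an example of the preset value discussed above.

```python
import cv2

def extract_static_images(video_path, frame_interval=30):
    """Pull one static image every `frame_interval` frames from the uploaded
    item video; the interval is the tunable trade-off described above."""
    frames = []
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % frame_interval == 0:
            frames.append(frame)
        index += 1
    capture.release()
    return frames
```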
  • Step S303 Use the static image to generate a three-dimensional model of the virtual item.
  • step S302 several static images of the virtual item can be extracted.
  • the static images will be used to generate a three-dimensional model of the virtual item.
  • the technology of building a three-dimensional model through static images can use smart devices to perform graphic processing and three-dimensional calculations on the collected static images, thereby automatically generating a three-dimensional model of virtual objects.
  • This embodiment describes the generation of a three-dimensional model of a virtual item.
  • the static image is obtained through video data to construct the required three-dimensional model.
  • the generation method is simple and the operation is convenient.
  • the three-dimensional model constructed from actual static images is more realistic, improving the look and feel of the product and the user's shopping experience.
  • determining the display parameters of the three-dimensional model includes the following steps:
  • Step S401 Determine the display size and zoom ratio of the three-dimensional model.
  • the display parameters of the 3D model can be determined so that the model not only reflects the real information of the virtual item but also reflects its most appropriate size in the environment, so that the user can intuitively and clearly know whether the virtual item matches the user's real use environment.
  • the display size refers to the size at which the 3D model is displayed in the real scene image, such as the height, length, and width of the table.
  • the zoom ratio refers to the proportion of the 3D model that needs to be reduced or enlarged according to the actual scene image, such as reduced to 80% and enlarged to 120%.
  • Step S402 Determine the display position of the three-dimensional model.
  • step S401 the display size and zoom ratio of the three-dimensional model can be determined.
  • the display position of the three-dimensional model can be determined.
  • the display position refers to the position of the three-dimensional model in the real scene image.
  • the display position of the 3D model should be the most suitable position such as the placement position and the use position in the real scene image. For example, when the virtual item is a table, its display position should be directly in front of a stool or sofa, and keep an appropriate distance.
  • this embodiment obtains the size information of the three-dimensional model and uses the proportional relationship between the model and the entity to determine the display size, zoom ratio and display position that give the best display effect, avoiding virtual items that appear too big or too small and the jarring effect of poor placement.
  • the augmented reality display method further includes: in response to an interactive instruction for the virtual item, displaying a processed three-dimensional model image of the virtual item in the real scene image.
  • when the three-dimensional model of the virtual item is placed in the real scene image, it cannot be assumed that the initial default color is the most suitable one, nor that the initially displayed side has the best display effect; therefore the 3D model image can be adjusted.
  • when the user performs a gesture action, it is equivalent to sending an interactive instruction for the virtual item, which can trigger actions such as zooming, rotating, and switching the three-dimensional model, thereby completing the interaction with the three-dimensional model image of the virtual item in the real scene image.
  • displaying the processed three-dimensional model image of the virtual item in the real scene image includes: processing the three-dimensional model of the virtual item and determining the display parameters of the processed three-dimensional model; and, based on the display parameters of the processed three-dimensional model, displaying the processed three-dimensional model image of the virtual item in the real scene image; the processing includes at least one of the following: color processing, position processing, display size processing, and scaling processing.
  • color processing refers to adjusting the color of the 3D model; position processing refers to adjusting its display position; display size processing refers to adjusting its display size; and scaling refers to adjusting its zoom ratio.
  • the user can interact with the 3D model of the virtual item in the real scene image and see the 3D model image after the interaction, which preserves the user's sense of realism and accommodates the personal preferences of different users.
  • the virtual items can be placed in the real scene in the best display state, ensuring the best display effect of online shopping.
  • the augmented reality display method further includes: generating an interactive instruction for the virtual item in response to the user's gesture action.
  • the user's hand dynamic image can be collected first.
  • the dynamic image of the user's hand is acquired through the camera device of the smart terminal or the depth camera on the augmented reality device, and the dynamic image of the hand is analyzed.
  • the acquired real-time hand information may correspond to valid gesture actions or invalid gesture actions, where valid gesture actions are the preset gesture actions.
  • At least one preset gesture action should be set in the augmented reality device, for example, thumb and index finger closing, index finger rotation, index finger flipping and other gesture actions, but it is not limited to using specific fingers or specific actions.
  • a preset gesture action is set with corresponding operations, such as zooming, rotating, and switching virtual items.
  • the gesture action can be matched with the preset gesture action. If the matching is successful, the gesture action can perform the corresponding operation. If the matching is unsuccessful, the gesture action is regarded as an invalid gesture action.
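A minimal sketch of this matching step, assuming a separate hand-tracking component has already classified the gesture into a label; the gesture names and operation names are hypothetical.

```python
# Hypothetical mapping from preset gestures to interaction instructions.
PRESET_GESTURES = {
    "index_finger_flip": "move_item",        # first gesture -> move
    "thumb_index_pinch": "zoom_item",        # second gesture -> zoom
    "index_finger_rotate": "rotate_item",
}

def gesture_to_instruction(detected_gesture):
    """Match a recognized gesture label against the preset gestures; unmatched
    gestures are treated as invalid and produce no instruction."""
    return PRESET_GESTURES.get(detected_gesture)   # None means invalid gesture
```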
  • when the user interacts with the three-dimensional model of the virtual item and needs to adjust it, for example by zooming, rotating, or switching, the user can send an interactive instruction for the relevant operation by performing the corresponding gesture.
  • This embodiment uses the user’s gestures to issue an interactive instruction to the virtual item to perform operations on the virtual item.
  • it can improve the fluency and accuracy of the user's shopping and make it easier for the user to adjust the display state of the three-dimensional model to the desired effect.
  • it breaks down the barrier in traditional online shopping that prevents users from interacting with products, and improves the immersion of the user's online shopping experience.
  • generating an interactive instruction for the virtual item in response to a user's gesture action may include the following steps:
  • Step S501 When it is determined that the gesture action is the first gesture, a first interaction instruction for moving the virtual item is generated.
  • this step covers the scenario in which the acquired gesture action is determined to be the first gesture, i.e. a preset gesture action; for example, the first gesture may be an index finger flip.
  • when the gesture action occurs, it is equivalent to triggering the first interaction instruction, which corresponds to the operation of moving the virtual item.
  • Step S502 When it is determined that the gesture action is the second gesture, a second interaction instruction for zooming the virtual item is generated.
  • step S501 the user's gesture action has been acquired, and this step lists scenarios when it is determined that the acquired gesture action is the second gesture.
  • the second gesture may be the closing of the thumb and index finger.
  • the second interactive instruction can cover two situations: when the distance between the thumb and index finger changes from small to large, the virtual item can be reduced; when the distance changes from large to small, the virtual item can be enlarged.
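One way the zoom amount could be derived from the pinch gesture is sketched below; which direction of finger movement enlarges or reduces the item is a design choice of the application, as described above.

```python
import math

def pinch_zoom_ratio(thumb_xy, index_xy, initial_distance):
    """Derive a zoom ratio from the current thumb-index distance relative to
    the distance when the pinch started."""
    current = math.dist(thumb_xy, index_xy)
    if initial_distance <= 0:
        return 1.0
    return current / initial_distance   # >1: fingers moved apart, <1: moved together
```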
  • this embodiment lists the adjustments to the display state of the virtual item corresponding to two gesture actions and explains the user's interaction modes, so that the user can operate virtual items more accurately while shopping; the virtual items appear more realistic and the shopping mode is more flexible.
  • FIG. 6 schematically shows an implementation environment of the augmented reality display method of the present disclosure, and the implementation environment includes: a server 10 and a terminal device 20.
  • the server 10 refers to a device with computing capability and storage capability.
  • the server 10 may be a server, a server cluster composed of multiple servers, or a cloud computing center.
  • the merchant 30 can shoot the product.
  • the merchant 30 can shoot the product in 360 degrees, and then the merchant 30 uploads the captured video data to the server 10, and the server 10 receives the video data Then, the video data is processed to obtain a three-dimensional model.
  • the terminal device 20 refers to a device with data acquisition capabilities and image processing capabilities.
  • the terminal device 20 includes electronic terminals such as mobile phones, tablet computers, multimedia playback devices, and wearable devices.
  • the user 40 can view the merchandise of the merchant 30 in the terminal device 20.
  • the terminal device 20 receives the request of the user 40 and obtains the product from the server 10 according to the request.
  • the terminal device 20 may collect a real scene image, and obtain boundary information and position information of entities in the real scene image, and then synthesize the real scene image and the above-mentioned three-dimensional model.
  • the user 40 can also adjust the three-dimensional model in the terminal device 20, such as zooming, selecting a position, etc.
  • the terminal device 20 adjusts the three-dimensional model in the real scene image according to the operation of the user 40, and displays the adjusted composite image.
  • the terminal device 20 and the server 10 may communicate through a network.
  • the network may be a wired network or a wireless network.
  • the generation process of the three-dimensional model of the virtual desk lamp is as follows:
  • Step 910 the merchant photographs the physical desk lamp; the merchant refers to the seller who sells the physical desk lamp, and the physical desk lamp refers to the desk lamp displayed in the real physical world;
  • Step 920: The merchant uploads the captured video data to the server; the video data contains the virtual desk lamp, which, in contrast to the above-mentioned physical desk lamp, refers to the desk lamp in the video data processed by the server or other equipment;
  • Step 930: The server generates a three-dimensional model of the virtual desk lamp according to the video data; after obtaining the video data, the server can extract static images of a preset number of frames from it and then generate the three-dimensional model of the virtual desk lamp from these static images; for example, the three-dimensional model of the virtual desk lamp is shown as 1010 in Figure 8;
  • the real scene image is a room that includes a desk
  • the user can experience the display effect of the above-mentioned virtual desk lamp on the desk of the room through the terminal device.
  • the display method of the virtual desk lamp on the desk is as follows:
  • Step 940 The user determines to display the virtual table lamp; the user can select the virtual table lamp from the merchandise sold by the merchant, and determine to experience the display effect of the virtual table lamp in the real scene image in the terminal device;
  • Step 950 the terminal device obtains the three-dimensional model of the virtual table lamp; the terminal device obtains the three-dimensional model of the virtual table lamp from the server in response to the user's selection;
  • Step 960 The terminal device recognizes at least one entity in the real scene image, and determines the boundary information and position information of each entity;
  • the real scene image may be an image taken by the user using the terminal device, and the real scene image may include at least one entity; for example, the real scene image is a room 1020 that includes a desk 1030.
  • the desk 1030 is an entity, and the terminal device can determine the boundary information and location information of the desk;
  • Step 970: The terminal device determines the display size, zoom ratio and display position of the 3D model according to the boundary information and location information of the entity, and displays the 3D model in the real scene image; from the 3D model of the virtual desk lamp and the boundary and position information of the entity in the real scene image, the terminal device can calculate the display size, zoom ratio and display position of the 3D model.
  • the values of these parameters are the initial defaults calculated by the terminal, and the terminal then displays the 3D model in the real scene image according to these parameters; for example, as shown in FIG. 8, the terminal device displays the three-dimensional model 1010 in the real scene image 1020 according to the desk 1030;
  • Step 980: The user adjusts the 3D model in the real scene image; the display position of the 3D model calculated by the terminal device may not be the best one, and the user can adjust the 3D model in the real scene image according to individual needs;
  • Step 990: The terminal device displays the adjusted three-dimensional model in the real scene image according to the user's adjustment; for example, as shown in FIG. 8, if the user's adjustment is to enlarge the three-dimensional model 1010 and move it to the left, the terminal device displays the adjusted three-dimensional model 1010 in the real scene image 1020 accordingly.
  • an augmented reality display device is also provided.
  • the augmented reality display device 600 may include: an information acquisition module 601, an information extraction module 602, a parameter determination module 603, and an image display module 604, where:
  • the information acquisition module 601 is configured to acquire entity information in the collected real scene image; the information extraction module 602 is configured to extract, for the three-dimensional model of the virtual item, size information of the three-dimensional model; the parameter determination module 603 is configured to determine the display parameters of the three-dimensional model according to the entity information and the size information; the image display module 604 is configured to display, based on the display parameters, the three-dimensional model image of the virtual item in the real scene image.
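To make the module split concrete, a minimal Python skeleton of device 600 might look as follows; the class and method names are hypothetical and only mirror the four configured modules.

```python
class AugmentedRealityDisplayDevice:
    """Minimal skeleton mirroring the four modules of device 600; the method
    names are illustrative, not taken from the original implementation."""

    def acquire_entity_info(self, scene_image):
        # information acquisition module 601
        raise NotImplementedError

    def extract_model_size(self, item_model):
        # information extraction module 602
        raise NotImplementedError

    def determine_display_parameters(self, entity_info, size_info):
        # parameter determination module 603
        raise NotImplementedError

    def display_model_image(self, scene_image, item_model, display_parameters):
        # image display module 604
        raise NotImplementedError
```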
  • although modules or units of the augmented reality display device 600 are mentioned in the above detailed description, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.
  • an electronic device capable of implementing the above method is also provided.
  • the electronic device 700 according to this embodiment of the present invention will be described below with reference to FIG. 10.
  • the electronic device 700 shown in FIG. 10 is only an example, and should not bring any limitation to the function and application scope of the embodiment of the present invention.
  • the electronic device 700 is represented in the form of a general-purpose computing device.
  • the components of the electronic device 700 may include, but are not limited to: the aforementioned at least one processing unit 710, the aforementioned at least one storage unit 720, a bus 730 connecting different system components (including the storage unit 720 and the processing unit 710), and a display unit 740.
  • the storage unit stores program code, and the program code can be executed by the processing unit 710, so that the processing unit 710 executes the steps according to the various exemplary embodiments described in the "Exemplary Method" section of this specification.
  • the storage unit 720 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 721 and/or a cache storage unit 722, and may further include a read-only storage unit (ROM) 723.
  • the storage unit 720 may also include a program/utility tool 724 having a set of (at least one) program module 725.
  • the program modules 725 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
  • the bus 730 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
  • the electronic device 700 can also communicate with one or more external devices 900 (such as keyboards, pointing devices, Bluetooth devices, etc.), and can also communicate with one or more devices that enable users to interact with the electronic device 700, and/or communicate with Any device (such as a router, modem, etc.) that enables the electronic device 700 to communicate with one or more other computing devices. This communication can be performed through an input/output (Input/Output, I/O) interface 750.
  • the electronic device 700 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) through the network adapter 760.
  • the network adapter 760 communicates with the other modules of the electronic device 700 through the bus 730. It should be understood that, although not shown in the figure, other hardware and/or software modules can be used in conjunction with the electronic device 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • the exemplary embodiments described here can be implemented in software, or in software combined with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (for example a CD-ROM (Compact Disc Read-Only Memory), a USB flash drive, or a removable hard disk) or on a network, and includes several instructions to make a computing device (which may be a personal computer, server, terminal device, or network device, etc.) execute the method according to the embodiments of the present disclosure.
  • a computer-readable storage medium on which is stored a program product capable of implementing the above method in this specification.
  • various aspects of the present invention may also be implemented in the form of a program product, which includes program code; when the program product runs on a terminal device, the program code causes the terminal device to execute the steps according to the various exemplary embodiments of the present invention described in the above "Exemplary Method" section of this specification.
  • a program product 800 for implementing the above method according to an embodiment of the present invention is described; it may adopt a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer.
  • the program product of the present invention is not limited thereto.
  • the readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by or in combination with an instruction execution system, device, or device.
  • the program product can use any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above.
  • more specific examples of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate, or transmit a program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the readable medium can be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF (Radio Frequency), etc., or any suitable combination of the foregoing.
  • the program code used to perform the operations of the present invention can be written in any combination of one or more programming languages.
  • the programming languages include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • the remote computing device can be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computing device (for example, through the Internet using an Internet service provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An augmented reality display method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises: obtaining entity information in a collected real scene image (S101); for a three-dimensional model of a virtual article, extracting size information of the three-dimensional model (S102); determining display parameters of the three-dimensional model according to the entity information and the size information (S103); and displaying a three-dimensional model image of the virtual article in the real scene image on the basis of the display parameters (S104). The display parameters of the three-dimensional model of the virtual article are determined by means of an entity in the real scene image, so that a user can see exactly how the virtual article fits into the real scene, giving the user a more realistic experience, increasing the purchase rate and reducing the return rate; furthermore, human-computer interaction by means of gesture actions makes the user's online shopping more engaging and immersive; the shopping mode is more flexible and accurate, and the conversion rate is increased.

Description

Augmented reality display method and apparatus, electronic device, and storage medium

This disclosure claims the priority of the Chinese patent application filed on February 18, 2019 with application number 201910120539.3 and entitled "Augmented Reality Display Method and Apparatus, Electronic Device, and Storage Medium", the entire content of which is incorporated herein by reference.

Technical Field

The present disclosure relates to the field of augmented reality (Augmented Reality, AR), and in particular to an augmented reality display method, an augmented reality display device, an electronic device, and a computer-readable storage medium.

Background

With the development of technology, e-commerce has gradually entered people's lives. The promotion and popularization of online shopping has brought users an entirely new shopping experience. In the traditional online shopping mode, consumers view products in the form of pictures, but the product information that consumers can obtain is greatly restricted. In this case, an alternative shopping method is needed to provide consumers with the required information, so that they can intuitively understand product-related information and experience product functions anytime, anywhere.

It should be noted that the information disclosed in the above Background section is only used to enhance the understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art known to those of ordinary skill in the art.

Summary of the Invention

The purpose of the present disclosure is to provide an augmented reality display method, an augmented reality display device, an electronic device, and a computer-readable storage medium, so as to overcome, at least to some extent, problems caused by the limitations of related technologies, such as the inability to associate product images with scenes and the inability of consumers to interact with products.

Other characteristics and advantages of the present disclosure will become apparent through the following detailed description, or will be partly learned through the practice of the present disclosure.
According to one aspect of the present disclosure, an augmented reality display method is provided, the method including: acquiring entity information in a collected real scene image; extracting, for a three-dimensional model of a virtual item, size information of the three-dimensional model; determining display parameters of the three-dimensional model according to the entity information and the size information; and displaying, based on the display parameters, a three-dimensional model image of the virtual item in the real scene image.

In an exemplary embodiment of the present disclosure, the acquiring entity information in the collected real scene image includes: identifying at least one entity in the real scene image; and acquiring boundary information and location information of the entity.

In an exemplary embodiment of the present disclosure, the determining the display parameters of the three-dimensional model includes: determining a display size and a zoom ratio of the three-dimensional model; and determining a display position of the three-dimensional model.

In an exemplary embodiment of the present disclosure, the augmented reality display method further includes: in response to an interaction instruction for the virtual item, displaying a processed three-dimensional model image of the virtual item in the real scene image.

In an exemplary embodiment of the present disclosure, the displaying the processed three-dimensional model image of the virtual item in the real scene image includes: processing the three-dimensional model of the virtual item and determining display parameters of the processed three-dimensional model, the processing including at least one of the following: color processing, position processing, display size processing, and scaling processing; and, based on the display parameters of the processed three-dimensional model, displaying the processed three-dimensional model image of the virtual item in the real scene image.

In an exemplary embodiment of the present disclosure, the augmented reality display method further includes: generating an interaction instruction for the virtual item in response to a user's gesture action.

In an exemplary embodiment of the present disclosure, the generating an interaction instruction for the virtual item in response to the user's gesture action includes: when the gesture action is determined to be a first gesture, generating a first interaction instruction for moving the virtual item; when the gesture action is determined to be a second gesture, generating a second interaction instruction for zooming the virtual item.

In an exemplary embodiment of the present disclosure, the three-dimensional model of the virtual item is generated by the following steps: acquiring video data of the virtual item; extracting a preset number of static images from the video data; and generating the three-dimensional model of the virtual item from the static images.

According to one aspect of the present disclosure, an augmented reality display device is provided, the device including: an information acquisition module configured to acquire entity information in a collected real scene image; an information extraction module configured to extract, for a three-dimensional model of a virtual item, size information of the three-dimensional model; a parameter determination module configured to determine display parameters of the three-dimensional model according to the entity information and the size information; and an image display module configured to display, based on the display parameters, a three-dimensional model image of the virtual item in the real scene image.

According to one aspect of the present disclosure, an electronic device is provided, including a processor and a memory, wherein computer-readable instructions are stored on the memory, and the computer-readable instructions, when executed by the processor, implement the augmented reality display method of any of the above exemplary embodiments.

According to one aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the augmented reality display method of any of the above exemplary embodiments is implemented.

The exemplary embodiments of the present disclosure have the following beneficial effects:

In the method and device provided by the exemplary embodiments of the present disclosure, the display parameters of the three-dimensional model of the virtual item are determined from the entity in the real scene image, so that the user can see exactly how the virtual item fits into the real scene, giving the user a more realistic experience, increasing the purchase rate, and reducing the return rate caused by purchased items not matching their real use; further, human-computer interaction through gestures not only makes the user's online shopping more enjoyable but also gives the user a sense of immersion; the shopping mode is more flexible and accurate, and the conversion rate is improved.

It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings

The drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the disclosure, and are used together with the specification to explain the principles of the disclosure. Obviously, the drawings in the following description show only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.

FIG. 1 schematically shows a flowchart of an augmented reality display method in an exemplary embodiment of the present disclosure;

FIG. 2 schematically shows a flowchart of partial steps of an augmented reality display method in an embodiment of the present disclosure;

FIG. 3 schematically shows a flowchart of partial steps of an augmented reality display method in an embodiment of the present disclosure;

FIG. 4 schematically shows a flowchart of partial steps of an augmented reality display method in an embodiment of the present disclosure;

FIG. 5 schematically shows a flowchart of partial steps of an augmented reality display method in an embodiment of the present disclosure;

FIG. 6 schematically shows a flowchart of steps of an augmented reality display method in an embodiment of the present disclosure;

FIG. 7 schematically shows a flowchart of steps of an augmented reality display method in an embodiment of the present disclosure;

FIG. 8 schematically shows an implementation scenario of an augmented reality display method in an exemplary embodiment of the present disclosure;

FIG. 9 schematically shows a structural diagram of an augmented reality display device in an exemplary embodiment of the present disclosure;

FIG. 10 schematically shows an electronic device for implementing an augmented reality display method in an exemplary embodiment of the present disclosure;

FIG. 11 schematically shows a computer-readable storage medium for implementing an augmented reality display method in an exemplary embodiment of the present disclosure.
Detailed description
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. The exemplary embodiments, however, can be implemented in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the present disclosure will be thorough and complete and will fully convey the concepts of the exemplary embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in one or more embodiments in any suitable manner.
In addition, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the figures denote the same or similar parts, so their repeated description will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
It should be noted that in the present disclosure, the terms "include", "configured with", and "provided in" are open-ended and inclusive, meaning that elements/components/etc. other than those listed may also be present; the terms "first", "second", etc. are used only as labels and do not limit the number or order of their objects.
In an exemplary embodiment of the present disclosure, an augmented reality display method is first provided, applied to a terminal device. By way of example, the terminal device may be a computer or notebook equipped with a camera, multifunctional glasses with an augmented reality display, a smart terminal acting as the capture and processing side together with a home television connected over a communication link acting as the display side, or any other electronic terminal device with an augmented reality display function.
As shown in Fig. 1, the augmented reality display method may mainly include the following steps:
Step S101. Acquire entity information in a captured real scene image.
In practical applications, the camera device of a smart terminal can be used to capture real scene images. The camera device may include a single camera or multiple cameras at different angles. When the camera device includes only one camera, a 360° capture can be performed, collecting one view every 15°; the capture distance can be set to five distances such as 1 m, 2 m, 3 m, 4 m, and 5 m, and the capture environment can be set to two scenes such as indoor during the day and indoor at night. Correspondingly, when the camera device includes multiple cameras, only the different capture distances and capture environments need to be set.
For different virtual items, the most suitable entity in the captured real scene image needs to be selected as a reference, and for the real scene image the entity information therein needs to be acquired. An entity in the entity information may be an object contained in the real scene image, and the number of objects is not limited. Different entities refer to different objects depending on their environment. For example, if the real scene is a home, the entities may be a coffee table, chairs, a sofa, a table, and so on; if the real scene is a vehicle, the entities may be seats, a display screen, windows, and so on; if the real scene is an office, the entities may be a water dispenser, desks, computers, telephones, and so on; if the real scene is a shopping mall, the entities may be mannequins, full-length mirrors, and so on. Entity information then refers to information related to the entity, such as size information, position information, color information, shape information, and environment information, where the environment information is information about the environment in which the entity is located, for example the distance from the entity to a wall, the brightness of the environment, and the distances to other entities in the environment.
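By way of a non-limiting illustration only, the following Python sketch shows one possible way to represent the entity information described above as a simple record; the field names, units and example values are assumptions introduced for illustration and are not prescribed by the present disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class EntityInfo:
    """Illustrative container for per-entity information in a real scene image.

    All field names are hypothetical; the disclosure only requires that size,
    position, color, shape and environment information can be represented.
    """
    label: str                                   # e.g. "chair", "desk"
    size_m: Tuple[float, float, float]           # (length, width, height) in metres
    position_m: Tuple[float, float, float]       # position in the scene coordinate system
    mean_color_rgb: Tuple[int, int, int]         # dominant color of the entity
    environment: Dict[str, float] = field(default_factory=dict)

# Example: a chair detected 0.4 m from the wall in a daytime indoor scene
chair = EntityInfo(
    label="chair",
    size_m=(0.45, 0.45, 0.90),
    position_m=(1.2, 0.0, 2.5),
    mean_color_rgb=(120, 90, 60),
    environment={"dist_to_wall_m": 0.4, "brightness": 0.7},
)
print(chair)
```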
Step S102. For a three-dimensional model of a virtual item, extract size information of the three-dimensional model.
Step S101 acquires the entity information in the captured real scene image; this step acquires the size information of the three-dimensional model of the virtual item. Generally, a virtual item is presented in the form of a 3D model. The virtual item may be an item in an online store, such as furniture, clothing, equipment, musical instruments, or accessories. The item image can be obtained by entering a name on a display page to retrieve a provided item image, or by photographing the item with a camera tool. The acquired item images are used to generate a three-dimensional model of the corresponding virtual item, and the size information of the three-dimensional model is obtained. For example, when the virtual item is a table, the feature information of the three-dimensional model of the table may be its length, width, height, and other dimensions. Different size information can be obtained for different virtual items to facilitate subsequent operations.
Step S103. Determine display parameters of the three-dimensional model according to the entity information and the size information.
The entity information in the real scene image is obtained in step S101, and the size information of the three-dimensional model of the virtual item is obtained in step S102. In this step, the three-dimensional model of the online virtual item can be displayed in the real scene image, with the display parameters of the three-dimensional model determined from the entity information and the size information. If the entity in the real scene image is a chair, and the entity information of the chair (its size, shape, position, color, and so on) has already been obtained, then, when the virtual item is a table, the display size of the table can be determined from the proportional relationship between the size information in the obtained entity information and the size of the table, and display parameters such as the display position and matching color of the table can be determined from the other entity information of the chair.
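By way of a non-limiting illustration, the following Python sketch derives a display size, zoom ratio and display position for a table model from the entity information of a reference chair; the proportions, clearance value and data layout are illustrative assumptions rather than values fixed by the present disclosure.

```python
from typing import Tuple

def determine_display_parameters(
    entity_size_m: Tuple[float, float, float],      # reference entity, e.g. a chair
    entity_position_m: Tuple[float, float, float],
    model_size_m: Tuple[float, float, float],       # virtual item's 3D model, e.g. a table
    target_ratio: float = 1.6,                      # assumed: table ~1.6x the chair footprint
    clearance_m: float = 0.3,                       # assumed gap between chair and table
) -> dict:
    """Derive display size, zoom ratio and display position of the 3D model
    from the reference entity's size and position. The proportions used here
    are illustrative heuristics, not values taken from the disclosure."""
    desired_length = entity_size_m[0] * target_ratio
    zoom = desired_length / model_size_m[0]
    display_size = tuple(round(d * zoom, 3) for d in model_size_m)
    # Place the item directly in front of the entity, separated by the clearance.
    display_position = (
        entity_position_m[0],
        entity_position_m[1],
        entity_position_m[2] - (entity_size_m[1] / 2 + clearance_m + display_size[1] / 2),
    )
    return {"display_size_m": display_size, "zoom": round(zoom, 3), "position_m": display_position}

# Chair with a 0.45 m footprint vs. a table model of 1.2 m length
print(determine_display_parameters((0.45, 0.45, 0.90), (1.2, 0.0, 2.5), (1.2, 0.8, 0.75)))
```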
Step S104. Display a three-dimensional model image of the virtual item in the real scene image based on the display parameters.
The display parameters of the three-dimensional model of the virtual item have been determined in step S103; in this step, the three-dimensional model image of the virtual item can be displayed in the real scene image. In order to insert the three-dimensional model image of the virtual item into the real scene image captured by the camera device, so that the user can feel the presence of the virtual item in the real scene, the relevant parameters of the camera device need to be set, and the three-dimensional model image and the real scene image need to be superimposed with the three-dimensional model image as the reference, for example with the virtual item surrounding an entity in the real scene, covering the entity, or placed next to the entity.
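By way of a non-limiting illustration, the following Python sketch shows a minimal 2D alpha compositing of a pre-rendered model image onto the real scene image using NumPy; a full augmented reality pipeline would instead render the three-dimensional model with the camera parameters mentioned above, so this is only a simplified stand-in for the superimposition step, and all sizes and positions are example values.

```python
import numpy as np

def overlay_model_image(scene_bgr: np.ndarray,
                        model_bgra: np.ndarray,
                        top_left: tuple) -> np.ndarray:
    """Alpha-composite a rendered model image (with alpha channel) onto the
    real scene image at the given pixel position."""
    out = scene_bgr.copy()
    y, x = top_left
    h, w = model_bgra.shape[:2]
    roi = out[y:y + h, x:x + w].astype(np.float32)
    rgb = model_bgra[:, :, :3].astype(np.float32)
    alpha = model_bgra[:, :, 3:4].astype(np.float32) / 255.0
    out[y:y + h, x:x + w] = (alpha * rgb + (1.0 - alpha) * roi).astype(np.uint8)
    return out

# Toy example: paste a 50x50 semi-transparent red square onto a gray scene
scene = np.full((480, 640, 3), 128, dtype=np.uint8)
sprite = np.zeros((50, 50, 4), dtype=np.uint8)
sprite[:, :, 2] = 255        # red channel (BGR layout)
sprite[:, :, 3] = 180        # roughly 70% opacity
composited = overlay_model_image(scene, sprite, (200, 300))
print(composited.shape)
```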
The three-dimensional model image of the virtual item in the real scene image can be displayed on the lens of augmented reality glasses, or on the screen of a smart terminal, for example the display of a mobile phone, a computer, or a home television.
According to the augmented reality display method in this exemplary embodiment, the display information of the three-dimensional model of the virtual item to be displayed is determined from entities in the real scene, so that the most realistic size information of the virtual item is determined. Further, displaying the virtual item in the real scene is of great reference value for the user in deciding whether the item meets expectations and whether to purchase it, which improves the success rate of online purchases, reduces returns and exchanges caused by the inability to grasp product information, and saves cost and resources.
On the basis of the above embodiment, as shown in Fig. 2, acquiring the entity information in the real scene image includes the following steps:
Step S201. Identify at least one entity in the real scene image.
A real scene image can be captured in step S101; this step identifies entities in the real scene image. Entity recognition may be performed on the entire real scene image or on a specific region of it. Before recognizing the real scene image, the image should first be preprocessed; preprocessing methods may include downsampling, binarization, and filtering. Downsampling can be used to improve performance on the real scene image and make it match the size of the display area; binarization makes the selected feature points clearer and less dependent on color alone; Gaussian filtering can reduce noise so that the acquired real scene image is not affected by scratches, dust, and the like.
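By way of a non-limiting illustration, the following OpenCV sketch performs the preprocessing described above (downsampling, Gaussian filtering and binarization); the target width, kernel size and thresholding method are assumptions chosen for the example rather than requirements of the present disclosure.

```python
import cv2

def preprocess_scene(image_path: str, target_width: int = 640):
    """Minimal preprocessing sketch for a captured real scene image:
    downsampling, Gaussian filtering and binarization."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    # Downsample so the image matches the size of the display area
    scale = target_width / img.shape[1]
    small = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    # Gaussian filtering to suppress noise such as scratches or dust
    blurred = cv2.GaussianBlur(small, (5, 5), 0)
    # Binarization so that feature selection does not depend on color alone
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return small, binary

# small, binary = preprocess_scene("living_room.jpg")   # hypothetical file name
```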
Entities in the real scene image are recognized; the number of recognized entities is not limited and may be one or more. The more entities are recognized, the more precisely the display size and/or zoom ratio of the three-dimensional model and the display position can be determined. The entity may be recognized by recognizing a specific color or a specific shape; any approach that can recognize the required entity may be used, and this exemplary embodiment does not limit the recognition method.
Step S202. Obtain boundary information and position information of the entity.
The entities in the real scene image have been identified in step S201; this step obtains information related to the identified entities, which may be their boundary information and position information. The boundary information includes information corresponding to the boundary of the space occupied by the entity, and the position information includes information corresponding to the position of the entity in the real scene image. For example, the boundary information may be the length, width, and height of the entity. To obtain the position information of an entity, a three-dimensional coordinate system of the entity in the real scene image and the camera coordinate system of the camera device that captures the real scene image can be established, and the position information of the entity is obtained from the transformation between the three-dimensional coordinate system and the camera coordinate system. The boundary information of the entity can be determined by image recognition algorithms such as the Sobel edge detection algorithm or the Laplacian operator.
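By way of a non-limiting illustration, the following OpenCV sketch estimates the pixel-space boundary of the most prominent entity with Sobel edge detection followed by contour extraction; the thresholds and the choice of the largest contour are assumptions, and the mapping from pixel coordinates into the three-dimensional scene coordinate system is omitted here.

```python
import cv2
import numpy as np

def entity_bounding_box(binary: np.ndarray):
    """Estimate the pixel-space boundary of the most prominent entity using
    Sobel gradients and contour extraction (OpenCV 4.x return signatures)."""
    # Sobel gradients in x and y, combined into an edge-strength map
    gx = cv2.Sobel(binary, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(binary, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    _, edges = cv2.threshold(edges, 50, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return {"x": x, "y": y, "width": w, "height": h}

# box = entity_bounding_box(binary)   # 'binary' from the preprocessing sketch above
```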
This embodiment can identify entities in the real scene image and determine their related information, accurately grasping the entity information of the real environment, which is an important reference for determining the display parameters of the virtual item.
On the basis of the above embodiment, as shown in Fig. 3, obtaining the three-dimensional model of the virtual item includes the following steps:
Step S301. Obtain video data of the virtual item.
A merchant can use a shooting terminal to record a video of the virtual item and obtain video data. The shooting terminal may include a surveillance camera, the front camera of a smart terminal, the rear camera of a smart terminal, and so on, and may also include a processing terminal that processes the video data after it is acquired. The collected video data can be stored in databases and servers in formats such as xml and json.
Step S302. Extract a preset number of frames of static images from the video data.
The video data of the virtual item has been collected in step S301; in this step, static images can be extracted from the video data. Extraction of static images at a preset number of frames is achieved by setting a frame-number interval for extraction, where the starting point of each interval may be the frame number at which the previous static image was taken. The frame interval should not be set too small, otherwise static images will be extracted before the virtual item has changed, resulting in too many extracted images and wasted resources; nor should it be set too large, otherwise too few static images will be extracted, leading to insufficient extraction of useful information and a degraded user experience. Generally, so that the three-dimensional model of the virtual item can present a relatively realistic three-dimensional appearance, the extracted static images may include a front view, back view, left view, right view, top view, and bottom view of the virtual item. The static images may be extracted by encoding with the JPEG (Joint Photographic Experts Group) image compression standard.
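By way of a non-limiting illustration, the following OpenCV sketch extracts static images from the uploaded video at a preset frame interval counted from the previously extracted frame; the interval and the cap on the number of frames are illustrative assumptions.

```python
import cv2

def extract_static_frames(video_path: str, frame_interval: int = 30, max_frames: int = 12):
    """Extract static images from the merchant's video at a preset frame
    interval, counted from the frame at which the previous image was taken."""
    cap = cv2.VideoCapture(video_path)
    frames, index, last_taken = [], 0, -frame_interval
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if index - last_taken >= frame_interval:
            frames.append(frame)
            last_taken = index
        index += 1
    cap.release()
    return frames

# views = extract_static_frames("lamp_360.mp4", frame_interval=45)   # hypothetical file
# Each element of 'views' could then be JPEG-encoded with cv2.imencode(".jpg", f)
```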
Step S303. Generate a three-dimensional model of the virtual item using the static images.
Several static images of the virtual item can be extracted in step S302; this step uses the static images to generate the three-dimensional model of the virtual item. With techniques for building three-dimensional models from static images, a smart device can perform graphics processing and three-dimensional computation on the collected static images, thereby automatically generating the three-dimensional model of the virtual item.
This embodiment describes the generation of the three-dimensional model of the virtual item. Static images are obtained from video data to construct the required three-dimensional model; the generation is simple and convenient to operate, and a three-dimensional model built from real static images is more lifelike, which improves the look and feel of the product and the user's shopping experience.
On the basis of the above embodiments, as shown in Fig. 4, determining the display parameters of the three-dimensional model includes the following steps:
Step S401. Determine the display size and zoom ratio of the three-dimensional model.
The display parameters of the three-dimensional model can be determined from the entities in the real scene image so that they reflect not only the real information of the virtual item but also its most appropriate and suitable size in the environment; the user can then see intuitively and unambiguously whether the virtual item matches the real environment of use. For example, if the three-dimensional model is a table, the display size and zoom ratio appropriate for the placement area can be determined from the size and proportions of the stools or sofa in that area of the real scene image. The display size refers to the size at which the three-dimensional model is displayed in the real scene image, such as the height, length, and width of the table; the zoom ratio refers to the proportion by which the three-dimensional model needs to be reduced or enlarged for the real scene image, for example reduced to 80% or enlarged to 120%.
Step S402. Determine the display position of the three-dimensional model.
The display size and zoom ratio of the three-dimensional model can be determined in step S401; this step determines the display position of the three-dimensional model, that is, where the three-dimensional model is located in the real scene image. The display position of the three-dimensional model should be the most suitable placement or use position in the real scene image. For example, when the virtual item is a table, its display position should be directly in front of a stool or sofa, at a suitable distance.
Corresponding to the entity information in the real scene image, this embodiment obtains the size information of the three-dimensional model and, from the proportional relationship between the two, determines the display size, zoom ratio and display position that give the best display effect of the three-dimensional model, avoiding the jarring impression caused by a virtual item that is too large or too small, or that is badly placed.
On the basis of the above embodiments, the augmented reality display method further includes: in response to an interaction instruction for the virtual item, displaying a processed three-dimensional model image of the virtual item in the real scene image.
When the three-dimensional model of the virtual item is placed in the real scene image, it cannot be assumed that the initial default color is the best-matching color, nor that the initially displayed face gives the best display effect, so the three-dimensional model image of the virtual item needs to be adjusted. When the user performs a gesture action, this is equivalent to sending an interaction instruction for the virtual item; the interaction instruction can carry out actions such as zooming, rotating and switching the three-dimensional model. When the user has selected the best display effect of the virtual item, the process of interacting with the three-dimensional model image of the virtual item in the real scene image is complete.
On the basis of the above embodiments, displaying the processed three-dimensional model image of the virtual item in the real scene image in response to the interaction instruction for the virtual item includes: processing the three-dimensional model of the virtual item and determining display parameters of the processed three-dimensional model; and displaying the processed three-dimensional model image of the virtual item in the real scene image based on those display parameters. The processing includes at least one of color processing, position processing, display size processing and zoom ratio processing, where color processing refers to adjusting the color of the three-dimensional model, position processing refers to adjusting its display position, display size processing refers to adjusting its display size, and zoom ratio processing refers to adjusting its zoom ratio.
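By way of a non-limiting illustration, the following Python sketch applies an interaction instruction to the current display parameters of the three-dimensional model, covering position, zoom/display-size and color processing; the instruction format and parameter names are assumptions introduced for the example.

```python
def apply_interaction(display_params: dict, instruction: dict) -> dict:
    """Apply an interaction instruction to the current display parameters of
    the virtual item's 3D model. The {'type': ...} instruction layout is an
    assumption for illustration; the disclosure only requires that color,
    position, display-size and zoom processing can be triggered."""
    params = dict(display_params)
    kind = instruction.get("type")
    if kind == "move":                        # position processing
        dx, dy, dz = instruction["delta_m"]
        x, y, z = params["position_m"]
        params["position_m"] = (x + dx, y + dy, z + dz)
    elif kind == "scale":                     # zoom / display-size processing
        factor = instruction["factor"]
        params["zoom"] = params["zoom"] * factor
        params["display_size_m"] = tuple(d * factor for d in params["display_size_m"])
    elif kind == "recolor":                   # color processing
        params["color_rgb"] = instruction["color_rgb"]
    return params

state = {"position_m": (1.2, 0.0, 1.8), "zoom": 1.0,
         "display_size_m": (1.2, 0.8, 0.75), "color_rgb": (200, 200, 200)}
state = apply_interaction(state, {"type": "scale", "factor": 0.8})
print(state["display_size_m"])
```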
With an interaction instruction, this embodiment allows the user to interact with the three-dimensional model of the virtual item in the real scene image and to display the three-dimensional model image after the interaction. This preserves the user's sense of realism, allows personalized choices by different users, and also allows the virtual item to be placed in the real scene in its best display state, ensuring the best display effect for online shopping.
On the basis of the above embodiments, the augmented reality display method further includes: generating an interaction instruction for the virtual item in response to a gesture action of the user.
When acquiring the user's gesture action, a dynamic image of the user's hand can first be captured, through the camera device of the smart terminal or a depth camera on the augmented reality device, and then parsed. The real-time hand information obtained may include valid gesture actions and invalid gesture actions, where a valid gesture action is a preset gesture action. The augmented reality apparatus should be provided with at least one preset gesture action, for example closing the thumb and index finger, rotating the index finger, flipping the index finger, or other gestures, without being limited to a particular finger or a particular motion. A preset gesture action is associated with a corresponding operation, for example zooming, rotating or switching the virtual item. When the user's gesture action is acquired, it can be matched against the preset gesture actions; if the match succeeds, the corresponding operation is performed, and if it fails, the gesture action is treated as an invalid gesture. When the user interacts with the three-dimensional model of the virtual item and needs to adjust it, for example by zooming, rotating or switching, an interaction instruction can be sent through the performed gesture to carry out the corresponding operation.
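By way of a non-limiting illustration, the following Python sketch matches tracked fingertip coordinates against a small preset gesture vocabulary (a thumb-index pinch and an index-finger movement); the coordinates are assumed to come from any hand-tracking backend, and the thresholds and gesture set are illustrative assumptions.

```python
import math

def classify_gesture(thumb_tip, index_tip, prev_index_tip,
                     pinch_threshold=0.05, move_threshold=0.03):
    """Match a tracked hand pose against two preset gestures. Inputs are
    normalised (x, y) fingertip coordinates from an arbitrary hand tracker."""
    pinch_dist = math.dist(thumb_tip, index_tip)
    index_move = math.dist(index_tip, prev_index_tip)
    if pinch_dist < pinch_threshold:
        return "pinch"            # mapped to the zoom interaction instruction
    if index_move > move_threshold:
        return "index_flip"       # mapped to the move interaction instruction
    return "none"                 # treated as an invalid gesture action

print(classify_gesture((0.50, 0.52), (0.52, 0.53), (0.52, 0.53)))   # -> "pinch"
```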
In this embodiment, the user's gesture action issues an interaction instruction for the virtual item, performing an operation on it. On the one hand, this improves the fluency and accuracy of the user's shopping and makes it convenient for the user to adjust the display state of the three-dimensional model to achieve the desired display effect; on the other hand, it breaks down the barrier in traditional online shopping whereby users cannot interact with products, and increases the immersion of the user's online shopping experience.
On the basis of the above embodiments, as shown in Fig. 5, generating the interaction instruction for the virtual item in response to the user's gesture action may include the following steps:
Step S501. When the gesture action is determined to be a first gesture, generate a first interaction instruction for moving the virtual item.
For a gesture action that has already been acquired from the user, this step covers the case in which the acquired gesture action is determined to be the first gesture. When the acquired gesture action is a preset action and is the first gesture, the first gesture may be a flip of the index finger; when this gesture occurs, it is equivalent to triggering the first interaction instruction, which corresponds to the operation of moving the virtual item.
Step S502. When the gesture action is determined to be a second gesture, generate a second interaction instruction for zooming the virtual item.
The user's gesture action has been acquired in step S501; this step covers the case in which the acquired gesture action is determined to be the second gesture. When the acquired gesture action is a preset action and is the second gesture, the second gesture may be the closing of the thumb and index finger. Whenever this gesture occurs, it is equivalent to triggering the second interaction instruction, which corresponds to the operation of zooming the virtual item. The second interaction instruction may cover two cases: when the distance between the thumb and index finger changes from small to large, the virtual item may be reduced, and when the distance changes from large to small, the virtual item may be enlarged. Alternatively, the mapping may be reversed: when the distance between the thumb and index finger changes from large to small, the virtual item may be reduced, and when the distance changes from small to large, the virtual item may be enlarged.
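By way of a non-limiting illustration, the following Python sketch converts the change in thumb-index distance into a zoom factor, implementing the second of the two mappings above (fingers moving apart enlarge the virtual item, fingers closing reduce it); the linear mapping and sensitivity value are assumptions.

```python
def pinch_to_zoom_factor(prev_dist: float, curr_dist: float, sensitivity: float = 1.0) -> float:
    """Turn the change in thumb-index distance into a zoom factor for the
    second interaction instruction."""
    if prev_dist <= 0:
        return 1.0
    return 1.0 + sensitivity * (curr_dist - prev_dist) / prev_dist

print(pinch_to_zoom_factor(0.06, 0.09))   # fingers moving apart -> factor > 1 (enlarge)
print(pinch_to_zoom_factor(0.09, 0.06))   # fingers closing -> factor < 1 (reduce)
```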
This embodiment lists the adjustments to the display state of the virtual item corresponding to two gesture actions and explains the user's modes of interaction, so that the user can operate the virtual item more accurately when shopping; the virtual item is more lifelike and the shopping experience more flexible.
It should be noted that although the above exemplary embodiments describe the steps of the method of the present disclosure in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all of the steps must be performed to achieve the desired result. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
On the basis of the above embodiments, Fig. 6 schematically shows an implementation environment of the augmented reality display method of the present disclosure, which includes a server 10 and a terminal device 20.
The server 10 is a device with computing and storage capabilities; optionally, the server 10 may be a single server, a server cluster composed of multiple servers, or a cloud computing center. In the embodiment of the present disclosure, the merchant 30 can photograph a product, optionally shooting it from all directions over 360 degrees, and then upload the captured video data to the server 10; after receiving the video data, the server 10 processes it to obtain a three-dimensional model.
The terminal device 20 is a device with data acquisition and image processing capabilities; optionally, the terminal device 20 includes electronic terminals such as mobile phones, tablet computers, multimedia playback devices and wearable devices. In the embodiment of the present disclosure, the user 40 can view the merchant 30's products on the terminal device 20. When the user 40 chooses to display a product, the terminal device 20 receives the user 40's request and obtains the three-dimensional model corresponding to the product from the server 10 according to the request. The terminal device 20 can capture a real scene image, obtain the boundary information and position information of the entities in the real scene image, and then composite the real scene image with the three-dimensional model. The user 40 can also adjust the three-dimensional model on the terminal device 20, for example by zooming or selecting a position; the terminal device 20 adjusts the three-dimensional model in the real scene image according to the user 40's operation and displays the adjusted composite image.
In the embodiment of the present disclosure, the terminal device 20 and the server 10 can communicate over a network; optionally, the network may be a wired network or a wireless network.
The method of the present disclosure is described below with a specific example.
As shown in Fig. 7, assuming the virtual item is a virtual desk lamp, the three-dimensional model of the virtual desk lamp is generated as follows:
Step 910: the merchant photographs a physical desk lamp; the merchant is the seller of the physical desk lamp, and the physical desk lamp is the desk lamp that exists in the real physical world;
Step 920: the merchant uploads the captured video data to the server; the video data includes the virtual desk lamp, which, as opposed to the physical desk lamp above, refers to the desk lamp in the video data processed by the server or other devices;
Step 930: the server generates a three-dimensional model of the virtual desk lamp from the video data; after obtaining the video data, the server can extract static images at a preset number of frames from the video data and then generate the three-dimensional model of the virtual desk lamp from the static images; for example, the three-dimensional model of the virtual desk lamp is shown as 1010 in Fig. 8.
Assuming that the real scene image is a room containing a desk, the user can experience, through the terminal device, the display effect of the virtual desk lamp on the desk of the room. The method of displaying the virtual desk lamp on the desk is as follows:
Step 940: the user chooses to display the virtual desk lamp; the user can select the virtual desk lamp from the goods sold by the merchant and choose to experience, on the terminal device, the display effect of the virtual desk lamp in the real scene image;
Step 950: the terminal device obtains the three-dimensional model of the virtual desk lamp; in response to the user's selection, the terminal device obtains the three-dimensional model of the virtual desk lamp from the server;
Step 960: the terminal device identifies at least one entity in the real scene image and determines the boundary information and position information of each entity; the real scene image may be an image captured by the user with the terminal device and may include at least one entity; for example, as shown in Fig. 8, the real scene image is a room 1020 that includes a desk 1030, the desk 1030 being an entity, and the terminal device can determine the boundary information and position information of the desk;
Step 970: the terminal device determines the display size, zoom ratio and display position of the three-dimensional model from the boundary information and position information of the entity, and displays the three-dimensional model in the real scene image; from the three-dimensional model of the virtual desk lamp and the boundary information and position information of the entity in the real scene image, the terminal device can calculate the display size, zoom ratio and display position of the three-dimensional model; the values of these parameters are initial defaults calculated by the terminal, and the terminal then displays the three-dimensional model in the real scene image according to these parameters; for example, as shown in Fig. 8, the terminal device displays the three-dimensional model 1010 in the real scene image 1020 according to the desk 1030 and the three-dimensional model 1010;
Step 980: the user adjusts the three-dimensional model in the real scene image; the display position and other parameters of the three-dimensional model calculated by the terminal device may not give the best display, and the user can adjust the three-dimensional model in the real scene image according to his or her own personalized needs;
Step 990: the terminal device displays the adjusted three-dimensional model in the real scene image according to the user's adjustment; for example, as shown in Fig. 8, assuming the user's adjustment is to enlarge the three-dimensional model 1010 and move it to the left, the terminal device displays the adjusted three-dimensional model 1010 in the real scene image 1020 according to the user's adjustment.
In addition, in an exemplary embodiment of the present disclosure, an augmented reality display apparatus is also provided. As shown in Fig. 9, the augmented reality display apparatus 600 may include an information acquisition module 601, an information extraction module 602, a parameter determination module 603, and an image display module 604, where:
the information acquisition module 601 is configured to acquire entity information in the captured real scene image; the information extraction module 602 is configured to extract, for the three-dimensional model of the virtual item, size information of the three-dimensional model; the parameter determination module 603 is configured to determine display parameters of the three-dimensional model according to the entity information and the size information; and the image display module 604 is configured to display, based on the display parameters, a three-dimensional model image of the virtual item in the real scene image.
The specific details of the above augmented reality display apparatus have already been described in detail in the corresponding augmented reality display method and are therefore not repeated here.
It should be noted that although several modules or units of the augmented reality display apparatus 600 are mentioned in the detailed description above, this division is not mandatory. In fact, according to the embodiments of the present disclosure, the features and functions of two or more of the modules or units described above may be embodied in a single module or unit; conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
An electronic device 700 according to such an embodiment of the present invention is described below with reference to Fig. 10. The electronic device 700 shown in Fig. 10 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in Fig. 10, the electronic device 700 takes the form of a general-purpose computing device. The components of the electronic device 700 may include, but are not limited to, the at least one processing unit 710, the at least one storage unit 720, a bus 730 connecting the different system components (including the storage unit 720 and the processing unit 710), and a display unit 740.
The storage unit stores program code that can be executed by the processing unit 710, so that the processing unit 710 performs the steps according to the various exemplary embodiments of the present invention described in the "Exemplary method" section of this specification.
The storage unit 720 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) 721 and/or a cache storage unit 722, and may further include a read-only memory (ROM) 723.
The storage unit 720 may also include a program/utility 724 having a set of (at least one) program modules 725, such program modules 725 including, but not limited to, an operating system, one or more application programs, other program modules and program data, each or some combination of which may include an implementation of a network environment.
The bus 730 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, the processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 700 may also communicate with one or more external devices 900 (for example keyboards, pointing devices, Bluetooth devices, etc.), with one or more devices that enable a user to interact with the electronic device 700, and/or with any device (for example a router, a modem, etc.) that enables the electronic device 700 to communicate with one or more other computing devices. Such communication can take place through an input/output (I/O) interface 750. In addition, the electronic device 700 may communicate with one or more networks (for example a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 760. As shown in the figure, the network adapter 760 communicates with the other modules of the electronic device 700 through the bus 730. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 700, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
From the description of the above embodiments, those skilled in the art will readily appreciate that the exemplary embodiments described here can be implemented in software, or in software combined with the necessary hardware. Accordingly, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored on a non-volatile storage medium (which may be a CD-ROM (Compact Disc Read-Only Memory), a USB flash drive, a removable hard disk, etc.) or on a network, and includes a number of instructions to cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium is also provided on which a program product capable of implementing the above method of this specification is stored. In some possible embodiments, the various aspects of the present invention can also be implemented in the form of a program product including program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the present invention described in the "Exemplary method" section of this specification.
Referring to Fig. 11, a program product 800 for implementing the above method according to an embodiment of the present invention is described; it may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device, for example a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus or device.
The program product may use any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. The readable signal medium may also be any readable medium other than a readable storage medium, and may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus or device.
The program code contained on the readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wired, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
Those skilled in the art will readily conceive of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the technical field not disclosed by the present disclosure. The specification and the embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.

Claims (11)

  1. 一种增强现实显示方法,其特征在于,包括:An augmented reality display method, characterized in that it comprises:
    获取采集到的现实场景图像中的实体信息;Obtain entity information in the collected real scene images;
    针对虚拟物品的三维模型,提取所述三维模型的尺寸信息;For the three-dimensional model of the virtual item, extract the size information of the three-dimensional model;
    根据所述实体信息以及所述尺寸信息,确定所述三维模型的显示参数;Determine the display parameters of the three-dimensional model according to the entity information and the size information;
    基于所述显示参数,在所述现实场景图像中显示所述虚拟物品的三维模型图像。Based on the display parameters, a three-dimensional model image of the virtual item is displayed in the real scene image.
  2. 根据权利要求1所述的增强现实显示方法,其特征在于,所述获取采集到的现实场景图像中的实体信息,包括:The augmented reality display method according to claim 1, wherein the acquiring entity information in the collected real scene image comprises:
    识别所述现实场景图像中的至少一个实体;Identifying at least one entity in the real scene image;
    获取所述实体的边界信息和位置信息。Obtain boundary information and location information of the entity.
  3. 根据权利要求1所述的增强现实显示方法,其特征在于,所述确定所述三维模型的显示参数,包括:The augmented reality display method according to claim 1, wherein the determining the display parameters of the three-dimensional model comprises:
    确定所述三维模型的显示尺寸和缩放比例;Determining the display size and zoom ratio of the three-dimensional model;
    确定所述三维模型的显示位置。Determine the display position of the three-dimensional model.
  4. 根据权利要求1所述的增强现实显示方法,其特征在于,所述方法还包括:The augmented reality display method according to claim 1, wherein the method further comprises:
    响应于针对所述虚拟物品的交互指令,在所述现实场景图像中显示经过处理后的所述虚拟物品的三维模型图像。In response to the interactive instruction for the virtual item, the processed three-dimensional model image of the virtual item is displayed in the real scene image.
  5. 根据权利要求4所述的增强现实显示方法,其特征在于,所述在所述现实场景图像中显示经过处理后的所述虚拟物品的三维模型图像,包括:The augmented reality display method according to claim 4, wherein the displaying the processed three-dimensional model image of the virtual item in the real scene image comprises:
    对所述虚拟物品的三维模型进行处理,确定经过处理后的所述虚拟物品的三维模型的显示参数,所述处理包括以下至少一项:颜色处理、位置处理、显示尺寸处理以及缩放比例处理;Processing the three-dimensional model of the virtual item to determine display parameters of the processed three-dimensional model of the virtual item, the processing includes at least one of the following: color processing, position processing, display size processing, and scaling processing;
    基于经过处理后的所述虚拟物品的三维模型的显示参数,在所述现实场景图像中显示经过处理后的所述虚拟物品的三维模型图像。Based on the processed display parameters of the three-dimensional model of the virtual item, displaying the processed three-dimensional model image of the virtual item in the real scene image.
  6. 根据权利要求4所述的增强现实显示方法,其特征在于,所述方法还包括:The augmented reality display method of claim 4, wherein the method further comprises:
    针对用户的手势动作生成针对所述虚拟物品的交互指令。An interactive instruction for the virtual item is generated for the user's gesture action.
  7. 根据权利要求6所述的增强现实显示方法,其特征在于,所述针对用户 的手势动作生成针对所述虚拟物品的交互指令,包括:The augmented reality display method according to claim 6, wherein said generating an interactive instruction for said virtual item for said gesture action of a user comprises:
    当判断所述手势动作为第一手势时,生成用于移动所述虚拟物品的第一交互指令;When it is determined that the gesture action is the first gesture, generate a first interaction instruction for moving the virtual item;
    当判断所述手势动作为第二手势时,生成用于缩放所述虚拟物品的第二交互指令。When it is determined that the gesture action is a second gesture, a second interaction instruction for zooming the virtual item is generated.
  8. The augmented reality display method according to any one of claims 1 to 7, wherein the three-dimensional model of the virtual item is generated through the following steps:
    acquiring video data of the virtual item;
    extracting a preset number of frames of static images from the video data;
    generating the three-dimensional model of the virtual item by using the static images.
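
    A possible realisation of the frame-extraction step in claim 8: pulling a preset number of evenly spaced still images from a video of the item. The subsequent 3D reconstruction (e.g. photogrammetry) is outside this sketch, and the default of 30 frames is an assumption.

    import cv2

    def extract_frames(video_path, num_frames=30):
        """Extract `num_frames` evenly spaced still images from the item's video."""
        cap = cv2.VideoCapture(video_path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        frames = []
        for i in range(num_frames):
            cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / max(num_frames, 1)))
            ok, frame = cap.read()
            if ok:
                frames.append(frame)
        cap.release()
        return frames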
  9. An augmented reality display apparatus, comprising:
    an information acquisition module, configured to acquire entity information in a collected real scene image;
    an information extraction module, configured to extract, for a three-dimensional model of a virtual item, size information of the three-dimensional model;
    a parameter determination module, configured to determine display parameters of the three-dimensional model according to the entity information and the size information;
    an image display module, configured to display, based on the display parameters, a three-dimensional model image of the virtual item in the real scene image.
  10. An electronic device, comprising:
    a processor; and
    a memory having computer-readable instructions stored thereon, wherein the computer-readable instructions, when executed by the processor, implement the method according to any one of claims 1 to 8.
  11. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 8.
PCT/CN2019/125029 2019-02-18 2019-12-13 Augmented reality display method and apparatus, electronic device, and storage medium WO2020168792A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910120539.3 2019-02-18
CN201910120539.3A CN109903129A (en) 2019-02-18 2019-02-18 Augmented reality display methods and device, electronic equipment, storage medium

Publications (1)

Publication Number Publication Date
WO2020168792A1 true WO2020168792A1 (en) 2020-08-27

Family

ID=66944949

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/125029 WO2020168792A1 (en) 2019-02-18 2019-12-13 Augmented reality display method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN109903129A (en)
WO (1) WO2020168792A1 (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903129A (en) * 2019-02-18 2019-06-18 北京三快在线科技有限公司 Augmented reality display methods and device, electronic equipment, storage medium
CN110473293B (en) 2019-07-30 2023-03-24 Oppo广东移动通信有限公司 Virtual object processing method and device, storage medium and electronic equipment
CN110533780B (en) 2019-08-28 2023-02-24 深圳市商汤科技有限公司 Image processing method and device, equipment and storage medium thereof
CN110544299B (en) * 2019-09-05 2020-10-16 广东电网有限责任公司 Power distribution network 10kV pole load switch 3D model and generation method thereof
CN112535392B (en) * 2019-09-20 2023-03-31 北京外号信息技术有限公司 Article display system based on optical communication device, information providing method, apparatus and medium
US20210097731A1 (en) * 2019-09-26 2021-04-01 Apple Inc. Presenting environment based on physical dimension
CN112819559A (en) * 2019-11-18 2021-05-18 北京沃东天骏信息技术有限公司 Article comparison method and device
CN111063033A (en) * 2019-11-30 2020-04-24 国网辽宁省电力有限公司葫芦岛供电公司 Electric power material goods arrival acceptance method based on augmented reality technology
CN111084980B (en) * 2019-12-25 2023-04-07 网易(杭州)网络有限公司 Method and device for purchasing virtual article
CN111242734A (en) * 2020-01-09 2020-06-05 中移(杭州)信息技术有限公司 Commodity display method, server, terminal, system, electronic equipment and storage medium
CN111274910B (en) * 2020-01-16 2024-01-30 腾讯科技(深圳)有限公司 Scene interaction method and device and electronic equipment
CN111415422B (en) * 2020-04-17 2022-03-18 Oppo广东移动通信有限公司 Virtual object adjustment method and device, storage medium and augmented reality equipment
CN111562845B (en) * 2020-05-13 2022-12-27 如你所视(北京)科技有限公司 Method, device and equipment for realizing three-dimensional space scene interaction
CN111625103A (en) * 2020-06-03 2020-09-04 浙江商汤科技开发有限公司 Sculpture display method and device, electronic equipment and storage medium
CN111580679A (en) * 2020-06-07 2020-08-25 浙江商汤科技开发有限公司 Space capsule display method and device, electronic equipment and storage medium
CN111679741B (en) * 2020-06-08 2023-11-28 浙江商汤科技开发有限公司 Image processing method, device, electronic equipment and storage medium
CN113298588A (en) * 2020-06-19 2021-08-24 阿里巴巴集团控股有限公司 Method and device for providing object information and electronic equipment
CN111709818A (en) * 2020-06-30 2020-09-25 广东奥园奥买家电子商务有限公司 Augmented reality-based commodity information display method and device and computer equipment
CN111897431B (en) * 2020-07-31 2023-07-25 北京市商汤科技开发有限公司 Display method and device, display equipment and computer readable storage medium
CN111918114A (en) * 2020-07-31 2020-11-10 北京市商汤科技开发有限公司 Image display method, image display device, display equipment and computer readable storage medium
CN112016439B (en) * 2020-08-26 2021-06-29 上海松鼠课堂人工智能科技有限公司 Game learning environment creation method and system based on antagonistic neural network
CN112270765A (en) * 2020-10-09 2021-01-26 百度(中国)有限公司 Information processing method, device, terminal, electronic device and storage medium
CN112308980B (en) * 2020-10-30 2024-05-28 脸萌有限公司 Augmented reality interactive display method and device
CN112422901A (en) * 2020-10-30 2021-02-26 哈雷医用(广州)智能技术有限公司 Method and device for generating operation virtual reality video
CN112418995A (en) * 2020-11-26 2021-02-26 珠海格力电器股份有限公司 Online shopping virtual interaction method based on MR virtual reality technology and storage medium
CN112506463A (en) * 2020-12-04 2021-03-16 歌尔光学科技有限公司 Display method, device and equipment based on head-mounted equipment
CN112672185B (en) * 2020-12-18 2023-07-07 脸萌有限公司 Augmented reality-based display method, device, equipment and storage medium
CN112817449B (en) * 2021-01-28 2023-07-21 北京市商汤科技开发有限公司 Interaction method and device for augmented reality scene, electronic equipment and storage medium
CN113947670A (en) * 2021-09-18 2022-01-18 北京城市网邻信息技术有限公司 Information display method and device, electronic equipment and readable medium
CN113902520A (en) * 2021-09-26 2022-01-07 深圳市晨北科技有限公司 Augmented reality image display method, device, equipment and storage medium
CN114167744A (en) * 2021-12-23 2022-03-11 四川启睿克科技有限公司 AR-based household intelligent appliance management method
CN114792364A (en) * 2022-04-01 2022-07-26 广亚铝业有限公司 Aluminum profile door and window projection system and method based on VR technology
CN114935994B (en) * 2022-05-10 2024-07-16 阿里巴巴(中国)有限公司 Article data processing method, apparatus and storage medium
CN115082144A (en) * 2022-05-23 2022-09-20 阿里巴巴(中国)有限公司 Object display method and commodity processing method
CN114972599A (en) * 2022-05-31 2022-08-30 京东方科技集团股份有限公司 Method for virtualizing scene
CN117853694A (en) * 2024-03-07 2024-04-09 河南百合特种光学研究院有限公司 Virtual-real combined rendering method of continuous depth
CN118227009A (en) * 2024-04-11 2024-06-21 北京达佳互联信息技术有限公司 Article interaction method and device based on virtual image and electronic equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150077340A1 (en) * 2013-09-18 2015-03-19 Genius Toy Taiwan Co., Ltd. Method, system and computer program product for real-time touchless interaction
CN106817568A (en) * 2016-12-05 2017-06-09 网易(杭州)网络有限公司 A kind of augmented reality display methods and device
CN109213728A (en) * 2017-06-29 2019-01-15 深圳市掌网科技股份有限公司 Cultural relic exhibition method and system based on augmented reality
CN107862580A (en) * 2017-11-22 2018-03-30 纽世纪(广东)电子商务有限公司 A kind of commodity method for pushing and system
CN108269307A (en) * 2018-01-15 2018-07-10 歌尔科技有限公司 A kind of augmented reality exchange method and equipment
CN108492363A (en) * 2018-03-26 2018-09-04 广东欧珀移动通信有限公司 Combined method, device, storage medium based on augmented reality and electronic equipment
CN109903129A (en) * 2019-02-18 2019-06-18 北京三快在线科技有限公司 Augmented reality display methods and device, electronic equipment, storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210406994A1 (en) * 2020-06-24 2021-12-30 Nan Ya Plastics Corporation Cloud-based cyber shopping mall system
US11580591B2 (en) * 2020-06-24 2023-02-14 Nan Ya Plastics Corporation Cloud-based cyber shopping mall system
CN113362472A (en) * 2021-05-27 2021-09-07 百度在线网络技术(北京)有限公司 Article display method, apparatus, device, storage medium and program product

Also Published As

Publication number Publication date
CN109903129A (en) 2019-06-18

Similar Documents

Publication Publication Date Title
WO2020168792A1 (en) Augmented reality display method and apparatus, electronic device, and storage medium
US12086376B2 (en) Defining, displaying and interacting with tags in a three-dimensional model
US10755485B2 (en) Augmented reality product preview
WO2021018214A1 (en) Virtual object processing method and apparatus, and storage medium and electronic device
US10579134B2 (en) Improving advertisement relevance
US9204131B2 (en) Remote control system
WO2021213067A1 (en) Object display method and apparatus, device and storage medium
US10210664B1 (en) Capture and apply light information for augmented reality
CN111414225B (en) Three-dimensional model remote display method, first terminal, electronic device and storage medium
WO2015161653A1 (en) Terminal operation method and terminal device
CN111047379B (en) House decoration information processing method, device and system
KR20190005082A (en) Method and appratus for providing information on offline merchandise to sales on online through augmented reality
CN110288710B (en) Three-dimensional map processing method and device and terminal equipment
CN110809187B (en) Video selection method, video selection device, storage medium and electronic equipment
WO2017032078A1 (en) Interface control method and mobile terminal
CN112783700A (en) Computer readable medium for network-based remote assistance system
CN112766406A (en) Article image processing method and device, computer equipment and storage medium
US11756260B1 (en) Visualization of configurable three-dimensional environments in a virtual reality system
CN105138763A (en) Method for real scene and reality information superposition in augmented reality
WO2021228200A1 (en) Method for realizing interaction in three-dimensional space scene, apparatus and device
CN114363705A (en) Augmented reality equipment and interaction enhancement method
WO2023124972A1 (en) Display state switching method, apparatus and system, electronic device and storage medium
CN114449355B (en) Live interaction method, device, equipment and storage medium
KR20110085033A (en) Multi-display device and method of providing information using the same
WO2020253342A1 (en) Panoramic rendering method for 3d video, computer device, and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19916307

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19916307

Country of ref document: EP

Kind code of ref document: A1