CN111597465A - Display method and device and electronic equipment - Google Patents

Display method and device, and electronic equipment

Info

Publication number
CN111597465A
Authority
CN
China
Prior art keywords
furniture; three-dimensional model; house; model; displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010353008.1A
Other languages
Chinese (zh)
Inventor
Dong Yang (董杨)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010353008.1A
Publication of CN111597465A
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9538: Presentation of query results
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10: Services
    • G06Q 50/16: Real estate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure disclose a display method, a display apparatus, and an electronic device. In one embodiment, the method includes: capturing and displaying a real-house image in real time, and displaying at least one candidate furniture identifier; in response to detecting a selection operation on a candidate furniture identifier, determining the selected candidate furniture identifier as the target furniture identifier and acquiring the target furniture three-dimensional model indicated by that identifier; and in response to detecting an indication operation indicating a placement position, displaying a first augmented image at the placement position in the displayed real-house image, the first augmented image being an image corresponding to the target furniture three-dimensional model. A new display mode is thereby provided.

Description

Display method and device and electronic equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a display method and apparatus, and an electronic device.
Background
With the development of the internet, users increasingly rely on terminal devices for a variety of functions. For example, a user can browse and search house listing information on a terminal device, and thus obtain a wealth of listing information without leaving home. A user can also use online listing information to screen out listings they are interested in, and then view them on site with a broker.
Disclosure of Invention
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The embodiment of the disclosure provides a display method, a display device and electronic equipment.
In a first aspect, an embodiment of the present disclosure provides a display method, including: capturing and displaying a real-house image in real time, and displaying at least one candidate furniture identifier; in response to detecting a selection operation on a candidate furniture identifier, determining the selected candidate furniture identifier as the target furniture identifier and acquiring the target furniture three-dimensional model indicated by that identifier; and in response to detecting an indication operation indicating a placement position, displaying a first augmented image at the placement position in the displayed real-house image, the first augmented image being an image corresponding to the target furniture three-dimensional model.
In a second aspect, an embodiment of the present disclosure provides a display apparatus, including: a first display unit configured to capture and display a real-house image in real time and to display at least one candidate furniture identifier; a first acquisition unit configured to, in response to detecting a selection operation on a candidate furniture identifier, determine the selected candidate furniture identifier as the target furniture identifier and acquire the target furniture three-dimensional model indicated by that identifier; and a second display unit configured to, in response to detecting an indication operation indicating a placement position, display a first augmented image at the placement position in the displayed real-house image, the first augmented image being an image corresponding to the target furniture three-dimensional model.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the presentation method as described in the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the steps of the presentation method as described in the first aspect.
According to the display method, display apparatus, and electronic device provided above, a real-house image is first captured and displayed; the target furniture three-dimensional model is then determined from the user's selection of a candidate furniture identifier; and finally the placement position indicated by the user's indication operation is determined and the augmented image corresponding to the target furniture three-dimensional model is displayed at that position in the displayed house image. This provides a new display mode: an augmented furniture image is overlaid on the house image captured and displayed in real time, giving the user a realistic preview of the target furniture in the house on which to base a decision, thereby saving the user time and money. Without such a realistic preview as a basis for judgment, the user might buy furniture with problems such as an unsuitable size or a mismatched style, at considerable cost in time and money.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flowchart of one embodiment of a presentation method according to the present disclosure;
FIG. 2 is a flowchart of another embodiment of a presentation method according to the present disclosure;
FIG. 3 is a flowchart of yet another embodiment of a presentation method according to the present disclosure;
FIG. 4 is a schematic structural diagram of one embodiment of a display device according to the present disclosure;
FIG. 5 is an exemplary system architecture to which the presentation method of one embodiment of the present disclosure may be applied;
fig. 6 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring to FIG. 1, a flow of one embodiment of a presentation method according to the present disclosure is shown. The presentation method shown in FIG. 1 includes the following steps:
Step 101: capture and display a real-house image in real time, and display at least one candidate furniture identifier.
In this embodiment, the execution subject of the presentation method (for example, a terminal device) may capture a real-house image in real time and display it. The execution subject may also display at least one candidate furniture identifier.
In this embodiment, the execution subject may capture real-world images of the house through a camera; a house three-dimensional model may then be constructed from those images, either by the execution subject itself or by a server communicatively connected to it.
In this embodiment, the execution subject may render the house three-dimensional model according to its own pose (position and orientation). As an example, a three-dimensional rendering pipeline may be used to convert the house three-dimensional model into a two-dimensional image, which is then displayed.
In this embodiment, the execution subject can display at least one candidate furniture identifier on the interface that displays the house image.
Here, a candidate furniture identifier may indicate a candidate furniture three-dimensional model. The identifier may take various forms, which are not limited herein; as an example, it may be a name or an image.
In some application scenarios, three-dimensional models of various pieces of furniture can be prepared in advance and stored in association with their furniture identifiers. All or some of these models can then serve as candidate furniture three-dimensional models, with their identifiers displayed as candidate furniture identifiers.
As an example, a three-dimensional model of a sliding-door wardrobe can be made from a wardrobe with sliding doors, with the furniture identifier "sliding-door wardrobe"; likewise, a three-dimensional model of a hinged-door wardrobe can be made from a wardrobe with hinged doors, with the furniture identifier "hinged-door wardrobe".
Step 102: in response to detecting a selection operation on a candidate furniture identifier, determine the candidate furniture identifier selected by the selection operation as the target furniture identifier, and acquire the target furniture three-dimensional model indicated by the target furniture identifier.
In this embodiment, in response to detecting a selection operation on a candidate furniture identifier, the execution subject may determine the selected candidate furniture identifier as the target furniture identifier and acquire the target furniture three-dimensional model indicated by it.
Here, the candidate furniture three-dimensional model indicated by a candidate furniture identifier may be stored locally on the execution subject or on a server.
Here, after determining the target furniture identifier, the execution subject may obtain the target furniture three-dimensional model it indicates from local storage or from the server.
As an example, if the user selects "hinged-door wardrobe", the hinged-door wardrobe three-dimensional model may be obtained as the target furniture three-dimensional model.
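As a concrete illustration of steps 101 and 102, the catalog of candidate furniture models and the identifier lookup can be sketched as follows. This is a minimal sketch in Python; the catalog contents, the vertex-list representation of a model, and all names are illustrative assumptions, not prescribed by this disclosure.

```python
# Hypothetical catalog: each candidate furniture identifier maps to a
# pre-built three-dimensional model, here reduced to a list of
# (x, y, z) vertices for illustration.
FURNITURE_CATALOG = {
    "sliding-door wardrobe": [(0.0, 0.0, 0.0), (0.6, 0.0, 0.0),
                              (0.6, 2.0, 0.0), (0.0, 2.0, 0.0)],
    "hinged-door wardrobe":  [(0.0, 0.0, 0.0), (0.55, 0.0, 0.0),
                              (0.55, 2.2, 0.0), (0.0, 2.2, 0.0)],
}

def candidate_furniture_identifiers():
    """Identifiers displayed alongside the live house image (step 101)."""
    return sorted(FURNITURE_CATALOG)

def acquire_target_model(target_identifier):
    """Step 102: resolve the selected identifier to its 3D model."""
    if target_identifier not in FURNITURE_CATALOG:
        raise KeyError(f"no model stored for identifier {target_identifier!r}")
    return FURNITURE_CATALOG[target_identifier]
```

In a real system the lookup would typically check local storage first and fall back to the server, as described above.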
Step 103: in response to detecting an indication operation indicating a placement position, display the first augmented image at the placement position in the real-house image.
In this embodiment, in response to detecting the indication operation indicating the placement position, the execution subject may display the first augmented image superimposed on the real-house image displayed in step 101.
Here, the position of the indication operation may be the position the user operates on the screen, and the placement position may indicate the spatial position at which the user wishes to place the furniture.
Here, the specific implementation form of the above-described instruction operation may be various, and is not limited herein.
As an example, the indication operation may be a drag operation on the target furniture identifier, and the end position of the drag may be taken as the placement position.
As another example, the indication operation may consist of click operations: in some implementation scenarios, the user first taps the target furniture identifier and then taps a position on the screen; the spatial position indicated by the tapped screen position is taken as the placement position, and the taps on the screen constitute the indication operation.
Here, the position of the superimposed display may be the placement position indicated by the indication operation.
Here, the first augmented image is an image corresponding to the target furniture three-dimensional model.
Here, superimposed display of the first augmented image may be implemented in various ways, which are not limited herein.
As an example, the image at the indicated position may be used as an anchor, and the first augmented image may be superimposed at that anchor in subsequently displayed real-house images.
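One way to map a tapped screen position to a placement position in space can be sketched as follows: assume an idealized pinhole camera, level with the floor and looking along +z, and intersect the ray through the tapped pixel with the floor plane y = 0. The camera model, coordinate conventions, and parameter names are assumptions made for illustration; the disclosure does not specify this mapping.

```python
def screen_tap_to_floor_point(u, v, cam_pos, focal, width, height):
    """Map a tap at pixel (u, v) to a 3D point on the floor plane y = 0.

    Assumes a pinhole camera at cam_pos (x right, y up, looking along +z)
    with focal length `focal` in pixels and an image of size width x height.
    """
    cx, cy, cz = cam_pos
    # Ray direction through the pixel, in camera coordinates.
    dx = (u - width / 2.0) / focal
    dy = -(v - height / 2.0) / focal   # image v grows downward
    dz = 1.0
    if dy >= 0:
        raise ValueError("ray does not hit the floor (tap at or above the horizon)")
    t = -cy / dy                       # solve cy + t*dy = 0 for the floor hit
    return (cx + t * dx, cy + t * dy, cz + t * dz)
```

For example, with the camera 1.5 m above the floor, a tap at the bottom center of the image maps to a point on the floor directly in front of the camera.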
It should be noted that, in the display method provided in this embodiment, a real-house image is first captured and displayed; the target furniture three-dimensional model is then determined from the user's selection of a candidate furniture identifier; and finally the placement position indicated by the user's indication operation is determined and the augmented image corresponding to the target furniture three-dimensional model is displayed at that position in the displayed house image. This provides a new display mode: an augmented furniture image is overlaid on the house image captured and displayed in real time, giving the user a realistic preview of the target furniture in the house on which to base a decision, thereby saving the user time and money. Without such a realistic preview as a basis for judgment, the user might buy furniture with problems such as an unsuitable size or a mismatched style, at considerable cost in time and money.
In some embodiments, step 103 may include: obtaining a first video rendered from the first derived three-dimensional model, and displaying the first video.
Here, the first derived three-dimensional model is obtained by combining the target furniture three-dimensional model with a house three-dimensional model corresponding to the real-house image.
It can be understood that the first video may be obtained by rendering the first derived three-dimensional model according to the current pose of the execution subject.
Here, the house three-dimensional model may be constructed in real time from real-house images captured in real time, may be pre-built, or may be obtained by modifying an initial model based on real-house images captured in real time.
In some application scenarios, the house three-dimensional model constructed from the real-house image can be determined as the house three-dimensional model corresponding to that image.
Here, since the first derived three-dimensional model includes the target furniture three-dimensional model and the house three-dimensional model, the first video may include the real-house image and the first augmented image.
Here, the execution subject may render the first derived three-dimensional model according to its own pose (position and orientation). As an example, a three-dimensional rendering pipeline may be used to convert the first derived three-dimensional model into two-dimensional images, which are then displayed. It can be understood that these two-dimensional images, arranged in time order, form the first video.
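The pose-dependent conversion of the derived model into two-dimensional images can be sketched with a bare pinhole projection. This is only a stand-in for a full rendering pipeline: real renderers also handle camera rotation, occlusion, lighting, and rasterization, all omitted here, and the function name is an illustrative assumption.

```python
def project_vertices(vertices, cam_pos, focal, width, height):
    """Project 3D vertices of the derived model into 2D pixel coordinates.

    Assumes a pinhole camera at cam_pos looking along +z (x right, y up);
    vertices behind the camera are skipped.
    """
    cx, cy, cz = cam_pos
    pixels = []
    for (x, y, z) in vertices:
        rz = z - cz                    # depth relative to the camera
        if rz <= 0:
            continue                   # behind the camera: not visible
        u = width / 2.0 + focal * (x - cx) / rz
        v = height / 2.0 - focal * (y - cy) / rz
        pixels.append((u, v))
    return pixels
```

Running this once per frame with the execution subject's current pose, and stacking the resulting images in time order, yields the first video described above.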
It should be noted that, because the first derived three-dimensional model fuses the target furniture three-dimensional model with the house three-dimensional model, the relative position between the two remains unchanged as the execution subject's pose changes; this reduces the artificial look of the image rendered from the target furniture three-dimensional model and improves the realism of the target furniture as displayed in the house.
In some embodiments, the first derived three-dimensional model may be generated by the following first generation step: acquiring a house three-dimensional model corresponding to the real-house image; determining a combination position in the house three-dimensional model according to the placement position, the combination position being where the target furniture three-dimensional model joins the house three-dimensional model; and combining the target furniture three-dimensional model with the house three-dimensional model at the combination position to obtain the first derived three-dimensional model.
Here, the electronic device that executes the first generation step may be the execution subject itself or another electronic device.
Here, as noted above, the placement position may indicate the spatial position at which the user wishes to place the furniture. It can be understood that the house three-dimensional model has a mapping relationship with three-dimensional space; the spatial position has a mapped point in the house three-dimensional model, and this mapped point can be understood as the combination position of the target furniture three-dimensional model and the house three-dimensional model.
Here, the target furniture three-dimensional model may be added to the house three-dimensional model at the combination position to obtain the first derived three-dimensional model.
It should be noted that, by determining the combination position from the placement position and then combining the target furniture three-dimensional model with the house three-dimensional model at that position, an accurate first derived model can be obtained according to the position indicated by the user.
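A minimal sketch of the combining step, under the simplifying assumption that a model is just a vertex list: the furniture model is translated so that its local origin lands on the combination position, and the translated vertices are merged into the house model. Real systems would also merge faces, materials, and other mesh data.

```python
def combine_models(house_vertices, furniture_vertices, combination_position):
    """First generation step (sketch): place the target furniture model at
    the combination position inside the house model and merge the two."""
    px, py, pz = combination_position
    # Translate the furniture so its local origin sits at the combination position.
    placed = [(x + px, y + py, z + pz) for (x, y, z) in furniture_vertices]
    # The merged vertex list stands in for the first derived three-dimensional model.
    return house_vertices + placed
```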
In some embodiments, the method may further include: in response to detecting the indication operation, determining, according to the placement position, whether the target furniture three-dimensional model has a first position conflict relationship with a placed-object model in the house three-dimensional model.
Here, a placed-object model may be a three-dimensional model of an object already placed in the house. Such placed objects may include, but are not limited to, at least one of the following: the walls of the house and furniture already in the house.
As an example, it may be determined whether any placed-object model lies within the sphere centered at the placement position whose radius is a preset distance; if so, it is determined that the target furniture three-dimensional model has the first position conflict relationship with a placed-object model in the house three-dimensional model.
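The sphere test in the example above can be sketched as follows, with the simplification (an assumption made here) that each placed object is represented by a single reference point:

```python
import math

def has_first_position_conflict(placement_position, placed_object_positions, radius):
    """Return True if any placed object's reference point lies within
    `radius` (the preset distance) of the placement position."""
    for obj in placed_object_positions:
        if math.dist(placement_position, obj) <= radius:
            return True
    return False
```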
In some application scenarios, if the first position conflict relationship is determined to exist, the execution subject may set the final placement position of the target furniture three-dimensional model near, rather than at, the placement position indicated by the indication operation, thereby reminding the user that the target furniture cannot be placed exactly at the indicated position.
It should be noted that determining, based on the placement position, whether the target furniture three-dimensional model conflicts with a placed-object model in the house three-dimensional model amounts to determining whether the target furniture conflicts in position with objects already in the house, and hence whether the furniture can actually be used in the house; this avoids the problem of buying furniture that cannot be placed in the house.
In some embodiments, determining whether the first position conflict relationship exists may include: determining a candidate placement region for the target furniture three-dimensional model in the house three-dimensional model according to the placement position and the size of the target furniture model; determining whether any placed-object model lies within the candidate placement region; and, in response to determining that one does, determining that the first position conflict relationship exists.
Here, using the placement position together with the size of the target furniture model allows the candidate placement region to be determined accurately, improving the accuracy of determining whether the first position conflict relationship exists.
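One plausible reading of the candidate placement region is an axis-aligned box centered (in the floor plane) on the placement position and sized by the furniture's width, height, and depth; the conflict test is then a standard box-overlap check. The region shape and all names are assumptions for illustration.

```python
def candidate_region(placement_position, furniture_size):
    """Axis-aligned box footprint for the furniture: centered on the
    placement position in x/z, resting on it in y.
    Returns ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    (px, py, pz), (w, h, d) = placement_position, furniture_size
    return ((px - w / 2, py, pz - d / 2), (px + w / 2, py + h, pz + d / 2))

def boxes_overlap(a, b):
    """Standard AABB intersection test on ((min), (max)) boxes."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] < bmax[i] and bmin[i] < amax[i] for i in range(3))
```

A placed object whose bounding box overlaps the candidate region then triggers the first position conflict relationship.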
In some embodiments, the method may further include: in response to determining that there is a first location conflict relationship, presenting first prompt information.
Here, the first prompt information may be used to indicate that there is a position conflict between the target furniture three-dimensional model and a placed-object model. In other words, the first prompt information may indicate that the target furniture conflicts in position with an object already placed in the house, i.e., that the two cannot be present at the same time (colloquially, the furniture will not fit).
Here, the form of the first prompt message may be various, and is not limited herein.
As an example, the presentation of the first prompt information may include, but is not limited to, at least one of: marking the conflict region between the target furniture three-dimensional model and the placed-object model with a preset pattern (for example, highlighting, red coloring, or circling); and playing an alert sound.
It should be noted that presenting the first prompt information when the first position conflict relationship exists promptly reminds the user that the target furniture cannot be placed at the indicated placement position, preventing the user from purchasing the furniture on the mistaken assumption that it could be placed there.
In some embodiments, the method further includes: in response to detecting a first adjustment operation, adjusting the target furniture three-dimensional model according to the adjusted state indicated by the first adjustment operation, and displaying based on the adjusted target furniture three-dimensional model. Here, the first adjustment operation is used to adjust the display state of the displayed target furniture three-dimensional model.
Here, the specific implementation manner of the first adjusting operation may be set according to practical situations, and is not limited herein.
As an example, the first adjustment operation described above may include a zoom-in and/or zoom-out operation on the presentation area of the target furniture model.
As an example, the first adjustment operation may include a selection operation on the target furniture three-dimensional model together with a click on a preset control; that is, after the selection operation, the displayed model is zoomed in or out by clicking preset controls.
Here, the above-mentioned exhibition state may include, but is not limited to, at least one of the following: size, color, location, etc.
Here, displaying based on the adjusted target furniture three-dimensional model may include combining the adjusted model with the house three-dimensional model to obtain a new derived model, generating a new video from that model and the execution subject's current pose, and displaying the new video on the execution subject.
It should be noted that the first adjustment operation allows the displayed target furniture three-dimensional model to be adjusted to the user's expectations, with the adjusted model displayed as an augmented image within the real-house image. This gives the user a realistic preview of the desired target furniture in the house, making it easy to reach a decision on that basis and saving the user time and money; without such a realistic preview as a basis for judgment, the user might buy furniture with problems such as an unsuitable size or a mismatched style, at considerable cost in time and money.
In some embodiments, the display state includes a model size, and the first adjustment operation includes a size adjustment operation. Adjusting the target furniture three-dimensional model according to the adjusted state indicated by the first adjustment operation then includes: adjusting the size of the target furniture three-dimensional model according to the adjusted size indicated by the size adjustment operation.
Here, the user can adjust the size of the furniture model. Thus, if the user likes the style of a piece of furniture but its default size conflicts with the existing arrangement of the house, the user can try different sizes to see virtually whether a resizable version of the furniture would fit the house.
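The size adjustment can be sketched as scaling the target furniture three-dimensional model about its geometric center. Per-axis scale factors are an assumption made here so that width, height, and depth can be changed independently; the text only speaks of an "adjusted size".

```python
def resize_model(vertices, scale_factors):
    """Scale a vertex-list model about its geometric center.

    scale_factors is (sx, sy, sz); a factor of 1.0 leaves that axis unchanged.
    """
    n = len(vertices)
    # Geometric center of the model.
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    sx, sy, sz = scale_factors
    return [(cx + (x - cx) * sx, cy + (y - cy) * sy, cz + (z - cz) * sz)
            for (x, y, z) in vertices]
```

The resized model would then be recombined with the house model and re-rendered, as described for the first adjustment operation.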
In some embodiments, the method further comprises: and displaying the adjusted size.
It should be noted that displaying the adjusted size to the user makes it convenient for the user to select and purchase furniture of that size, eliminates the need for the user to measure on site, and avoids the problem of the user purchasing furniture of an improper size.
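A minimal sketch of such a size adjustment, assuming dimensions are stored as a (width, depth, height) tuple in metres; `adjust_size` and its uniform-scaling behaviour are illustrative assumptions, not the disclosed implementation:

```python
def adjust_size(dims, axis, new_value, uniform=True):
    """Return adjusted (width, depth, height) dimensions.

    dims: current (w, d, h) in metres; axis: index of the edited dimension;
    new_value: the adjusted size indicated by the size adjustment operation.
    With uniform=True, the other dimensions are scaled proportionally; the
    resulting tuple is also what would be displayed back to the user.
    """
    scale = new_value / dims[axis]
    if uniform:
        return tuple(round(d * scale, 3) for d in dims)
    adjusted = list(dims)
    adjusted[axis] = new_value
    return tuple(adjusted)

# A 2.0 m-wide wardrobe scaled down to 1.0 m wide, keeping proportions.
resized = adjust_size((2.0, 0.6, 2.4), axis=0, new_value=1.0)
```

Whether an adjustment scales uniformly or changes one dimension independently would depend on whether the real furniture is actually offered in that variant.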
In some embodiments, the target furniture three-dimensional model includes at least two component sub-models.
As an example, a wardrobe with an openable door may include a wardrobe body and a wardrobe door. The three-dimensional model of such furniture may accordingly include a cabinet body sub-model and a cabinet door sub-model.
In some embodiments, the method may further include: in response to detecting a second adjustment operation, adjusting the component sub-model according to a second adjusted state indicated by the second adjustment operation, and performing display based on the adjusted component sub-model.
Here, the above-described second adjustment operation is used to adjust the presentation state of the presented component submodel.
In some embodiments, the display state of the component sub-model may include, but is not limited to, at least one of: color, display position.
Here, performing the display based on the adjusted component sub-model may include: combining the adjusted component sub-model with the house three-dimensional model to obtain a new derivative model, generating a new video according to the current pose of the execution body and the derivative model, and displaying the new video on the execution body.
It should be noted that subdividing the target furniture three-dimensional model into component sub-models allows the user to judge, from the state of each component sub-model in the house, whether every component of the furniture is suitable for the house. This avoids the situation in which the furniture as a whole fits the house but individual components of the furniture do not.
In some embodiments, the execution body may determine, according to the adjusted display position, whether the adjusted component sub-model and a placed object model in the house three-dimensional model have a second position conflict relationship, and present second prompt information in response to determining that the second position conflict relationship exists.
In some application scenarios, for the cabinet door sub-model in the three-dimensional model of a wardrobe with an openable door, the display position may indicate a closed state or an open state (including a degree of opening). The user can perform the second adjustment operation on the degree of opening of the cabinet door sub-model to indicate the adjusted display position of the cabinet door sub-model. The adjusted cabinet door sub-model may then have a second position conflict relationship with a placed object model in the house. Mapped to the real scene, this means that when the wardrobe door is opened, it may collide with an existing object in the house.
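One hedged way to picture such a door-sweep conflict check in 2-D — a deliberate simplification that samples only the door's outer tip along its opening arc, with hypothetical function and parameter names:

```python
import math

def door_sweep_conflicts(hinge, door_length, max_angle_deg, obstacle, step_deg=5):
    """Check whether opening a door up to max_angle_deg sweeps into an obstacle.

    hinge: (x, y) floor position of the hinge; door_length: metres;
    obstacle: axis-aligned footprint (xmin, ymin, xmax, ymax) of a placed
    object model. Only the door's tip is sampled, not its full edge.
    """
    for angle in range(0, max_angle_deg + 1, step_deg):
        rad = math.radians(angle)
        tip_x = hinge[0] + door_length * math.cos(rad)
        tip_y = hinge[1] + door_length * math.sin(rad)
        xmin, ymin, xmax, ymax = obstacle
        if xmin <= tip_x <= xmax and ymin <= tip_y <= ymax:
            return True  # door tip enters the placed object's footprint
    return False
```

If such a check reports a conflict for the adjusted display position, the second prompt information described below would be presented.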
In some embodiments, in response to determining that the second position conflict relationship exists, second prompt information may be presented.
Here, the presenting manner of the second prompt information may be set according to an actual application scenario, and is not limited herein.
As an example, the presenting manner of the second prompt information may include, but is not limited to, at least one of: marking the conflict region between the component sub-model and the placed object model with a preset pattern (such as highlighting, red coloring, or circling); and emitting an alarm tone.
It should be noted that subdividing the target furniture three-dimensional model into component sub-models, with display states that include display positions, allows the user to judge the furniture according to how it is actually used in the house and to verify that the various position states of the furniture do not conflict with existing objects, so as to determine whether the furniture is suitable for the house. This avoids the situation in which the furniture as a whole fits the house but the use of the furniture does not.
In some embodiments, the above method further comprises: displaying the candidate furniture identifier in association with a link address; and in response to detecting a trigger operation on the link address, jumping to the web page indicated by the link address.
Here, the web page indicated by the link address may be related to the candidate furniture indicated by the candidate furniture identifier.
As an example, the web page may be a profile web page of the candidate furniture indicated by the candidate furniture identifier.
As another example, the web page may be a purchase web page of the candidate furniture indicated by the candidate furniture identifier.
In some application scenarios, the link address may indicate a purchase web page. After determining the candidate furniture as the target furniture and checking its effect in the house in an augmented reality manner, the user may want to purchase the target furniture. Providing the purchase link therefore spares the user the process of searching for furniture of that style, saving the user's time.
It should be noted that by providing the link address related to the candidate furniture, related information about the candidate furniture can be presented when the user wants to learn more about it, saving the user's search time.
In some embodiments, the above method further comprises: and displaying the candidate furniture identification in association with the marking control.
Here, the tagging control may be used to tag candidate furniture identifications.
In some application scenarios, the aforementioned markup controls may include, but are not limited to, at least one of: a focus adding control, a collection control, a shopping cart adding control and the like.
In some application scenarios, the user may trigger the marking control to mark the candidate furniture identifier.
It should be noted that displaying the marking control allows a mark to be added to the candidate furniture identifier, so that candidate furniture the user may wish to revisit is recorded in time. When the user later wants to revisit candidate furniture of earlier interest, this saves search time.
Referring to fig. 2, a flow of one embodiment of a display method according to the present disclosure is shown. The display method shown in fig. 2 comprises the following steps:
Step 201, collecting and displaying a real house image in real time, and displaying at least one candidate furniture identifier.
Step 202, in response to detecting a selection operation for the candidate furniture identifier, determining the candidate furniture identifier selected by the selection operation as a target furniture identifier, and acquiring a target furniture three-dimensional model indicated by the target furniture identifier.
Step 203, acquiring a first video obtained by rendering a first derived three-dimensional model.
Step 204, displaying the first video.
It should be noted that details of implementation and technical effects of the display method provided in this embodiment may refer to descriptions of other parts in this disclosure, and are not described herein again.
Referring to fig. 3, a flow of one embodiment of a display method according to the present disclosure is shown. The display method shown in fig. 3 comprises the following steps:
Step 301, collecting and displaying a real house image in real time, and displaying at least one candidate furniture identifier.
Step 302, in response to detecting a selection operation for the candidate furniture identifier, determining the candidate furniture identifier selected by the selection operation as a target furniture identifier, and acquiring a target furniture three-dimensional model indicated by the target furniture identifier.
Step 303, in response to detecting the indication operation for indicating the placement position, presenting a first augmented image at the placement position of the presented real house image, and determining whether the target furniture three-dimensional model and the placed object model in the house three-dimensional model have a first position conflict relationship based on the placement position.
Step 304, in response to determining that the first position conflict relationship exists, presenting first prompt information.
Step 305, in response to detecting the first adjustment operation, adjusting the target furniture three-dimensional model according to a first adjusted state indicated by the first adjustment operation, and performing display based on the adjusted target furniture three-dimensional model.
Step 306, in response to detecting the second adjustment operation, adjusting the component sub-model according to a second adjusted state indicated by the second adjustment operation, and performing display based on the adjusted component sub-model.
Step 307, determining, according to the adjusted display position, whether the adjusted component sub-model and a placed object model in the house three-dimensional model have a second position conflict relationship.
Step 308, presenting second prompt information in response to determining that the second position conflict relationship exists.
It should be noted that details of implementation and technical effects of the display method provided in this embodiment may refer to descriptions of other parts in this disclosure, and are not described herein again.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of a display apparatus, which corresponds to the embodiment of the method shown in fig. 1, and which may be applied in various electronic devices.
As shown in fig. 4, the display apparatus of this embodiment includes: a first display unit 401, a first acquisition unit 402, and a second display unit 403. The first display unit is configured to collect and display a real house image in real time and to display at least one candidate furniture identifier. The first acquisition unit is configured to, in response to detecting a selection operation for a candidate furniture identifier, determine the candidate furniture identifier selected by the selection operation as a target furniture identifier, and acquire a target furniture three-dimensional model indicated by the target furniture identifier. The second display unit is configured to, in response to detecting an indication operation for indicating a placement position, display a first augmented image at the placement position of the displayed real house image, wherein the first augmented image is an image corresponding to the target furniture three-dimensional model.
In this embodiment, for the specific processing of the first display unit 401, the first acquisition unit 402, and the second display unit 403 of the display apparatus, and the technical effects thereof, reference may be made to the descriptions of step 101, step 102, and step 103 in the embodiment corresponding to fig. 1, which are not repeated here.
In some embodiments, the second display unit is further configured to: obtaining a first video obtained by rendering a first derived three-dimensional model, wherein the first derived three-dimensional model is obtained by combining the target furniture three-dimensional model and a house three-dimensional model, and the house three-dimensional model corresponds to the displayed real house image; and displaying the first video.
In some embodiments, the first derived three-dimensional model is generated by a first generation step of: acquiring a house three-dimensional model corresponding to the displayed real house image; determining a combination position in the three-dimensional house model according to the placement position, wherein the combination position is the combination position of the three-dimensional target furniture model and the three-dimensional house model; and combining the target furniture three-dimensional model with the house three-dimensional model at the combination position to obtain the first derivative three-dimensional model.
In some embodiments, the apparatus is further configured to: in response to detecting the indication operation, determining whether the target furniture three-dimensional model and a placed object model in the house three-dimensional model have a first position conflict relationship based on the placement position.
In some embodiments, the apparatus is further configured to: in response to determining that there is a first location conflict relationship, presenting first prompt information.
In some embodiments, said determining whether said target furniture three-dimensional model and said placed object model in said house three-dimensional model have a first positional conflict relationship comprises: determining a candidate placement area of the target furniture three-dimensional model in the house three-dimensional model according to the placement position and the size of the target furniture model; determining to have a first location conflict relationship in response to determining to have a placed object model within the candidate placement region.
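This candidate-placement-area check can be sketched as a 2-D footprint overlap test; the function names, and the assumption that placement positions and furniture sizes reduce to axis-aligned floor rectangles, are illustrative simplifications rather than the disclosed implementation:

```python
def placement_region(position, size):
    """Axis-aligned footprint (xmin, ymin, xmax, ymax) of furniture with
    floor size (width, depth), centred at floor position (x, y)."""
    (x, y), (w, d) = position, size
    return (x - w / 2, y - d / 2, x + w / 2, y + d / 2)

def rects_overlap(a, b):
    """True if two axis-aligned rectangles share any interior area."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def has_first_position_conflict(position, furniture_size, placed_objects):
    """A placed object model inside the candidate placement region
    means a first position conflict relationship exists."""
    region = placement_region(position, furniture_size)
    return any(rects_overlap(region, obj) for obj in placed_objects)
```

In a full 3-D implementation, the same idea would extend to bounding volumes that also account for furniture height.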
In some embodiments, the apparatus is further configured to: in response to detecting a first adjusting operation, adjusting the target furniture three-dimensional model according to a first adjusted state indicated by the first adjusting operation, and displaying based on the adjusted target furniture three-dimensional model, wherein the first adjusting operation is used for adjusting the displayed state of the displayed target furniture three-dimensional model.
In some embodiments, the display state comprises a model size, and the first adjustment operation comprises a size adjustment operation; and the adjusting the target furniture three-dimensional model according to the first adjusted state indicated by the first adjustment operation comprises: adjusting the size of the target furniture three-dimensional model according to the adjusted size indicated by the size adjustment operation.
In some embodiments, the apparatus is further configured to: and displaying the adjusted size.
In some embodiments, the target furniture three-dimensional model comprises at least two component sub-models; and the apparatus is further configured to: in response to detecting a second adjusting operation, adjusting the component sub-model according to a second adjusted state indicated by the second adjusting operation, and displaying based on the adjusted component sub-model, wherein the second adjusting operation is used for adjusting the displaying state of the displayed component sub-model.
In some embodiments, the display state of the component submodel includes a display position; and the apparatus is further configured to: determining whether the adjusted component sub-model and a placed object model in the house three-dimensional model have a second position conflict relationship or not according to the adjusted display position; in response to determining that there is a second location conflict relationship, presenting second prompt information.
In some embodiments, the apparatus is further configured to: displaying the candidate furniture identification and the link address in an associated mode; and jumping to the webpage indicated by the link address in response to detecting the trigger operation of the link address.
In some embodiments, the apparatus is further configured to: and displaying the candidate furniture identification in association with a marking control, wherein the marking control is used for marking the candidate furniture identification.
Referring to fig. 5, fig. 5 illustrates an exemplary system architecture to which the presentation method of one embodiment of the present disclosure may be applied.
As shown in fig. 5, the system architecture may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 serves to provide a medium for communication links between the terminal devices 501, 502, 503 and the server 505. Network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 501, 502, 503 may interact with the server 505 over the network 504 to receive or send messages and the like. Various client applications, such as web browser applications, search applications, and news and information applications, may be installed on the terminal devices 501, 502, 503. A client application in the terminal devices 501, 502, 503 may receive a user's instruction and complete the corresponding function according to that instruction, for example, adding corresponding information according to the user's instruction.
The terminal devices 501, 502, 503 may be hardware or software. When the terminal devices 501, 502, 503 are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and the like. When the terminal devices 501, 502, 503 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., software or software modules for providing distributed services) or as a single piece of software or software module, which is not specifically limited here.
The server 505 may be a server providing various services, for example, receiving an information acquisition request sent by the terminal devices 501, 502, 503, acquiring, in various ways, display information corresponding to the information acquisition request, and sending relevant data of the display information to the terminal devices 501, 502, 503.
It should be noted that the display method provided by the embodiment of the present disclosure may be executed by a terminal device, and accordingly, the display apparatus may be disposed in the terminal device 501, 502, 503. In addition, the display method provided by the embodiment of the disclosure can also be executed by the server 505, and accordingly, the display apparatus can be disposed in the server 505.
It should be understood that the number of terminal devices, networks, and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 6, shown is a schematic diagram of an electronic device (e.g., a terminal device or a server of fig. 5) suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the electronic device 600. The processing means 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: collect and display a real house image in real time, and display at least one candidate furniture identifier; in response to detecting a selection operation for a candidate furniture identifier, determine the candidate furniture identifier selected by the selection operation as a target furniture identifier, and acquire a target furniture three-dimensional model indicated by the target furniture identifier; and in response to detecting an indication operation for indicating a placement position, display a first augmented image at the placement position of the displayed real house image, wherein the first augmented image is an image corresponding to the target furniture three-dimensional model.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself; for example, the first display unit may also be described as a "unit that displays a real house image".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (16)

1. A method of displaying, comprising:
collecting and displaying a real house image in real time, and displaying at least one candidate furniture identifier;
in response to detecting a selection operation aiming at a candidate furniture identifier, determining the candidate furniture identifier selected by the selection operation as a target furniture identifier, and acquiring a target furniture three-dimensional model indicated by the target furniture identifier;
in response to detecting an indication operation for indicating a placement position, displaying a first augmented image at the placement position of the displayed real house image, wherein the first augmented image is an image corresponding to the target furniture three-dimensional model.
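As a concrete, purely illustrative sketch of the display step in claim 1: the snippet below composites a pre-rendered furniture image onto the captured house image at an indicated placement position. The claims do not prescribe any particular implementation; the function name, the NumPy image representation, and the top-left-corner placement convention are all assumptions made for this example.

```python
import numpy as np

def overlay_augmented_image(house_img, furniture_img, placement_xy):
    """Paste a pre-rendered furniture image onto the real house image.

    house_img, furniture_img: H x W x 3 uint8 arrays (illustrative format).
    placement_xy: (x, y) top-left corner of the paste region, in
    house-image pixel coordinates.
    """
    out = house_img.copy()
    x, y = placement_xy
    # Clip the paste region so it stays inside the house image.
    h = min(furniture_img.shape[0], out.shape[0] - y)
    w = min(furniture_img.shape[1], out.shape[1] - x)
    if h <= 0 or w <= 0:
        return out  # indicated position falls outside the frame
    out[y:y + h, x:x + w] = furniture_img[:h, :w]
    return out
```

Working on a copy keeps the live camera frame untouched, so the overlay can be redrawn each frame as the placement position changes.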
2. The method of claim 1, wherein the displaying a first augmented image at the placement position of the displayed real house image in response to detecting the indication operation for indicating the placement position comprises:
obtaining a first video obtained by rendering a first derived three-dimensional model, wherein the first derived three-dimensional model is obtained by combining the target furniture three-dimensional model and a house three-dimensional model, and the house three-dimensional model corresponds to the displayed real house image;
and displaying the first video.
3. The method of claim 2, wherein the first derived three-dimensional model is generated by a first generating step comprising:
acquiring a house three-dimensional model corresponding to the displayed real house image;
determining a combination position in the house three-dimensional model according to the placement position, wherein the combination position is the position at which the target furniture three-dimensional model is combined with the house three-dimensional model;
and combining the target furniture three-dimensional model with the house three-dimensional model at the combination position to obtain the first derivative three-dimensional model.
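The combination step of claim 3 can be sketched with a minimal indexed-mesh representation. The dict-of-lists model format, the function name, and the translate-then-append strategy below are assumptions for illustration only; a production system would operate on its real mesh or scene-graph structures.

```python
def combine_models(house_model, furniture_model, combination_pos):
    """Merge the furniture mesh into the house mesh at combination_pos.

    Models are dicts: {"vertices": [(x, y, z), ...], "faces": [(i, j, k), ...]}.
    """
    cx, cy, cz = combination_pos
    offset = len(house_model["vertices"])
    # Translate furniture vertices so the model sits at the combination position.
    moved = [(x + cx, y + cy, z + cz)
             for (x, y, z) in furniture_model["vertices"]]
    # Re-base furniture face indices to point at the appended vertices.
    rebased = [(i + offset, j + offset, k + offset)
               for (i, j, k) in furniture_model["faces"]]
    return {"vertices": house_model["vertices"] + moved,
            "faces": house_model["faces"] + rebased}
```

The resulting merged mesh is the "first derived three-dimensional model" that claim 2's rendering step would then turn into a video.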
4. The method of claim 1, further comprising:
in response to detecting the indication operation, determining whether the target furniture three-dimensional model and a placed object model in the house three-dimensional model have a first position conflict relationship based on the placement position.
5. The method of claim 4, further comprising:
in response to determining that the first position conflict relationship exists, displaying first prompt information.
6. The method of claim 4, wherein the determining whether the target furniture three-dimensional model and the placed object model in the house three-dimensional model have a first position conflict relationship comprises:
determining a candidate placement area of the target furniture three-dimensional model in the house three-dimensional model according to the placement position and a size of the target furniture three-dimensional model;
in response to determining that a placed object model exists within the candidate placement area, determining that the first position conflict relationship exists.
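One common way to realize the test in claim 6 is with axis-aligned bounding boxes: the candidate placement area is a box derived from the placement position and the furniture size, and a conflict exists if that box overlaps any placed object's box. The AABB choice and all names below are assumptions for illustration; the claim itself does not mandate a specific geometric test.

```python
def aabb_overlap(a, b):
    """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)) axis-aligned boxes.

    Strict inequalities mean merely touching boxes do not count as overlap.
    """
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] < bmax[i] and bmin[i] < amax[i] for i in range(3))

def has_position_conflict(placement, furniture_size, placed_boxes):
    """Build the candidate placement area as a furniture-sized box centred
    at the placement point, then test it against every placed object."""
    half = [s / 2 for s in furniture_size]
    cand = (tuple(p - h for p, h in zip(placement, half)),
            tuple(p + h for p, h in zip(placement, half)))
    return any(aabb_overlap(cand, box) for box in placed_boxes)
```

Centring the box at the placement point is one convention; anchoring it at the furniture's base on the floor plane would work the same way.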
7. The method of claim 1, further comprising:
in response to detecting a first adjusting operation, adjusting the target furniture three-dimensional model according to a first adjusted state indicated by the first adjusting operation, and displaying based on the adjusted target furniture three-dimensional model, wherein the first adjusting operation is used for adjusting a display state of the displayed target furniture three-dimensional model.
8. The method of claim 7, wherein the display state comprises a model size, and the first adjusting operation comprises a size adjusting operation; and
the adjusting the target furniture three-dimensional model according to the first adjusted state indicated by the first adjusting operation comprises:
and adjusting the size of the target furniture three-dimensional model according to the adjusted size indicated by the size adjusting operation.
9. The method of claim 8, further comprising:
and displaying the adjusted size.
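Claims 8 and 9 together amount to a uniform scale plus re-reporting of the model's extents. Below is a minimal sketch under illustrative assumptions (vertex-list model, scaling about the origin; names are not from the patent): scale every vertex by the requested factor, then recompute the bounding-box size so the adjusted size can be displayed.

```python
def scale_model(vertices, factor):
    """Uniformly scale vertices about the origin.

    Returns the scaled vertices and the new bounding-box extents,
    i.e. the 'adjusted size' that claim 9 displays to the user.
    """
    scaled = [(x * factor, y * factor, z * factor) for (x, y, z) in vertices]
    mins = [min(v[i] for v in scaled) for i in range(3)]
    maxs = [max(v[i] for v in scaled) for i in range(3)]
    return scaled, tuple(maxs[i] - mins[i] for i in range(3))
```

Displaying real-world extents (e.g. in centimetres) rather than a bare scale factor is what lets the user judge whether the furniture actually fits the room.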
10. The method of claim 1, wherein the target furniture three-dimensional model comprises at least two component sub-models; and
the method further comprises the following steps:
in response to detecting a second adjusting operation, adjusting the component sub-model according to a second adjusted state indicated by the second adjusting operation, and displaying based on the adjusted component sub-model, wherein the second adjusting operation is used for adjusting the display state of the displayed component sub-model.
11. The method of claim 10, wherein the display state of the component submodel includes a display position; and
the method further comprises the following steps:
determining, according to the adjusted display position, whether the adjusted component sub-model and a placed object model in the house three-dimensional model have a second position conflict relationship;
and in response to determining that the second position conflict relationship exists, displaying second prompt information.
12. The method of claim 1, further comprising:
displaying the candidate furniture identifier in association with a link address;
and in response to detecting a trigger operation on the link address, jumping to a webpage indicated by the link address.
13. The method of claim 1, further comprising:
and displaying the candidate furniture identifier in association with a marking control, wherein the marking control is used for marking the candidate furniture identifier.
14. A display device, comprising:
the first display unit is used for acquiring and displaying a real house image in real time and displaying at least one candidate furniture identifier;
the device comprises a first obtaining unit, a second obtaining unit and a third obtaining unit, wherein the first obtaining unit is used for responding to the detection of a selection operation aiming at a candidate furniture identifier, determining the candidate furniture identifier selected by the selection operation as a target furniture identifier, and obtaining a target furniture three-dimensional model indicated by the target furniture identifier;
and the second display unit is used for, in response to detecting an indication operation for indicating a placement position, displaying a first augmented image at the placement position of the displayed real house image, wherein the first augmented image is an image corresponding to the target furniture three-dimensional model.
15. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-13.
16. A computer-readable medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-13.
Application CN202010353008.1A, filed 2020-04-28 (priority date 2020-04-28), published as CN111597465A: Display method and device and electronic equipment. Legal status: Pending.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010353008.1A CN111597465A (en) 2020-04-28 2020-04-28 Display method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111597465A (en) 2020-08-28

Family

ID=72187721


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112130726A (en) * 2020-09-25 2020-12-25 北京五八信息技术有限公司 Page operation method and device, electronic equipment and computer readable medium
CN113190909A (en) * 2021-05-21 2021-07-30 杭州群核信息技术有限公司 Method, device and storage medium for determining position reasonability of target object
CN113313812A (en) * 2020-09-16 2021-08-27 阿里巴巴集团控股有限公司 Furniture display and interaction method and device, electronic equipment and storage medium
CN114095719A (en) * 2021-11-16 2022-02-25 北京城市网邻信息技术有限公司 Image display method, image display device and storage medium
CN114359522A (en) * 2021-12-23 2022-04-15 阿依瓦(北京)技术有限公司 AR model placing method and device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN108648276A (en) * 2018-05-17 2018-10-12 上海宝冶集团有限公司 A kind of construction and decoration design method, device, equipment and mixed reality equipment
CN108959668A (en) * 2017-05-19 2018-12-07 深圳市掌网科技股份有限公司 The Home Fashion & Design Shanghai method and apparatus of intelligence
CN110662015A (en) * 2018-06-29 2020-01-07 北京京东尚科信息技术有限公司 Method and apparatus for displaying image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination