CN116389791A - Home scene generation method and device, electronic equipment and readable storage medium - Google Patents

Home scene generation method and device, electronic equipment and readable storage medium

Info

Publication number
CN116389791A
Authority
CN
China
Prior art keywords
commodity
home decoration
model
scene
home
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310306014.5A
Other languages
Chinese (zh)
Inventor
陈右任
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Leiniao Network Media Co ltd
Original Assignee
Shenzhen Leiniao Network Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Leiniao Network Media Co ltd filed Critical Shenzhen Leiniao Network Media Co ltd
Priority to CN202310306014.5A
Publication of CN116389791A
Legal status: Pending

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
                • G06T15/00 3D [Three Dimensional] image rendering
    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
                        • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
                            • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
                                • H04N21/23412 Processing of video elementary streams for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
                                • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
                                • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
                    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
                        • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                            • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
                                • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
                                • H04N21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
                                • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a home scene generation method and device, an electronic device and a readable storage medium. The method includes: acquiring a target video to be reconstructed; performing scene reconstruction based on an indoor image frame in the target video to obtain an initial home decoration scene, wherein the indoor image frame contains home decoration commodities, and the initial home decoration scene contains commodity models corresponding to the home decoration commodities; and correlating the commodity information of the home decoration commodity with the commodity model to obtain a target home decoration scene. The method can automatically screen out indoor image frames containing indoor scenes from a video, reconstruct the home decoration scene contained in those indoor scenes, and correlate the commodity information of the home decoration commodities to the home decoration scene. When the reconstructed target home decoration scene is displayed, detailed commodity information of the home decoration commodities is provided to the user, so the user does not need to search for the commodity information of the home decoration commodities in the home decoration scene, and the operations required to acquire the home decoration scene are simplified.

Description

Home scene generation method and device, electronic equipment and readable storage medium
Technical Field
The application relates to the technical field of home scene generation, and in particular to a home scene generation method and device, an electronic device, and a readable storage medium.
Background
With the development of technology, smart televisions have become standard in every household, their hardware capabilities are increasingly powerful, and the products users experience on the television are increasingly rich. Video playback is the most commonly used television function. When a user watches videos such as TV dramas and movies on the television, the user sometimes wishes to obtain the home decoration layout shown in the picture and information about the home decoration items in that layout, so as to adjust the home decoration layout of his or her own room. At present, however, the user can only record the picture by photographing or similar means and then obtain the home decoration layout and the item information by searching, which is very cumbersome.
Disclosure of Invention
The application provides a home scene generation method and device, an electronic device and a readable storage medium, and aims to solve the technical problem that existing home scene generation methods are cumbersome to operate.
In a first aspect, the present application provides a home scene generating method, including:
acquiring a target video to be reconstructed;
performing scene reconstruction based on an indoor image frame in the target video to obtain an initial home decoration scene, wherein the indoor image frame contains home decoration commodities, and the initial home decoration scene contains commodity models corresponding to the home decoration commodities;
And correlating the commodity information of the home decoration commodity with the commodity model to obtain a target home decoration scene.
In one possible implementation manner of the present application, after associating the commodity information of the home decoration commodity with the commodity model to obtain the target home decoration scene, the method further includes:
and responding to a trigger instruction of a target commodity model in the target home decoration scene, and displaying target commodity information of target home decoration commodities associated with the target commodity model.
In one possible implementation manner of the present application, before the performing scene reconstruction based on the indoor image frame in the target video to obtain the initial home decoration scene, the method further includes:
acquiring candidate image frames containing ceilings from each image frame in the target video;
identifying and obtaining a wall body in each candidate image frame;
and judging whether each candidate image frame is an indoor image frame according to the relative position relationship between the wall body and the ceiling, and obtaining a judging result.
In one possible implementation manner of the present application, the reconstructing a scene based on the indoor image frame in the target video to obtain an initial home decoration scene includes:
Three-dimensional modeling is carried out on home decoration commodities contained in indoor image frames in the target video, and a commodity model corresponding to the home decoration commodities, commodity sizes of the commodity models and a position relation among the commodity models are obtained;
constructing a room model for placing each commodity model based on the commodity size and the position relation of each commodity model, wherein the room model is composed of rendered building material models, and the building material models comprise wall models, ceiling models and/or ground models;
and based on the position relation, placing each commodity model in the room model to obtain an initial home decoration scene.
In one possible implementation manner of the present application, the building a room model for placing each commodity model based on the commodity size and the position relationship of each commodity model includes:
generating a minimum bounding box of each commodity model based on the commodity size and the position relation of each commodity model;
generating a model to be rendered based on the bounding box size of the minimum bounding box, wherein the model to be rendered is composed of building material models to be rendered;
Identifying and obtaining model materials of the building material model to be rendered from the indoor image frame;
rendering the building material model to be rendered based on the model material to obtain a rendered building material model and a room model formed by the rendered building material model.
In one possible implementation of the present application, the commodity information includes a commodity purchase link, a commodity model number, and/or a brand.
In one possible implementation manner of the present application, the acquiring the target video to be reconstructed includes:
and responding to the scene generation instruction, and taking the currently played video as a target video to be reconstructed.
In a second aspect, the present application provides a home scene generating device, including:
the acquisition unit is used for acquiring a target video to be reconstructed;
the reconstruction unit is used for reconstructing a scene based on an indoor image frame in the target video to obtain an initial home decoration scene, wherein the indoor image frame contains home decoration commodities, and the initial home decoration scene contains commodity models corresponding to the home decoration commodities;
and the association unit is used for associating the commodity information of the home decoration commodity with the commodity model to obtain a target home decoration scene.
In a possible implementation manner of the present application, the association unit is further configured to:
and responding to a trigger instruction of a target commodity model in the target home decoration scene, and displaying target commodity information of target home decoration commodities associated with the target commodity model.
In a possible implementation of the present application, the reconstruction unit is configured to:
acquiring candidate image frames containing ceilings from each image frame in the target video;
identifying and obtaining a wall body in each candidate image frame;
and judging whether each candidate image frame is an indoor image frame according to the relative position relationship between the wall body and the ceiling, and obtaining a judging result.
In a possible implementation of the present application, the reconstruction unit is configured to:
three-dimensional modeling is carried out on home decoration commodities contained in indoor image frames in the target video, and a commodity model corresponding to the home decoration commodities, commodity sizes of the commodity models and a position relation among the commodity models are obtained;
constructing a room model for placing each commodity model based on the commodity size and the position relation of each commodity model, wherein the room model is composed of rendered building material models, and the building material models comprise wall models, ceiling models and/or ground models;
And based on the position relation, placing each commodity model in the room model to obtain an initial home decoration scene.
In a possible implementation of the present application, the reconstruction unit is configured to:
generating a minimum bounding box of each commodity model based on the commodity size and the position relation of each commodity model;
generating a model to be rendered based on the bounding box size of the minimum bounding box, wherein the model to be rendered is composed of building material models to be rendered;
identifying and obtaining model materials of the building material model to be rendered from the indoor image frame;
rendering the building material model to be rendered based on the model material to obtain a rendered building material model and a room model formed by the rendered building material model.
In one possible implementation of the present application, the commodity information includes a commodity purchase link, a commodity model number, and/or a brand.
In a possible implementation manner of the present application, the obtaining unit is further configured to:
and responding to the scene generation instruction, and taking the currently played video as a target video to be reconstructed.
In a third aspect, the present application further provides an electronic device, where the electronic device includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and when the processor invokes the computer program in the memory, the processor executes steps in any of the home scene generation methods provided herein.
In a fourth aspect, the present application further provides a readable storage medium, where a computer program is stored, where the computer program when executed by a processor implements the steps in any of the home scene generating methods provided in the present application.
In summary, the home decoration scene generation method provided by the embodiments of the application includes: acquiring a target video to be reconstructed; performing scene reconstruction based on an indoor image frame in the target video to obtain an initial home decoration scene, wherein the indoor image frame contains home decoration commodities, and the initial home decoration scene contains commodity models corresponding to the home decoration commodities; and correlating the commodity information of the home decoration commodity with the commodity model to obtain a target home decoration scene.
Therefore, the home decoration scene generation method provided by the embodiments of the application can automatically screen out indoor image frames containing indoor scenes from a video, reconstruct the home decoration scene contained in those indoor scenes, associate the commodity information of the home decoration commodities with the home decoration scene, and thereby simplify the operations required for a user to acquire the home decoration scene.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scenario schematic diagram of a home scenario generation method provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a home scene generating method provided in an embodiment of the present application;
fig. 3 is a schematic diagram of a home scenario provided in an embodiment of the present application;
FIG. 4 is a schematic flow chart of acquiring indoor image frames according to an embodiment of the present application;
fig. 5 is a schematic flow chart of acquiring an initial home decoration scene provided in an embodiment of the present application;
fig. 6 is a schematic block diagram of a home scene generating system provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of one embodiment of a home scene generating device provided in the embodiment of the present application;
fig. 8 is a schematic structural diagram of an embodiment of an electronic device provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In the description of the embodiments of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or an implicit indication of the number of features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known processes have not been described in detail in order to avoid unnecessarily obscuring descriptions of the embodiments of the present application. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed in the embodiments of the present application.
The embodiments of the application provide a home scene generation method, a home scene generation device, an electronic device and a readable storage medium. The home scene generation device may be integrated in an electronic device, and the electronic device may be a server, a terminal, or another device.
The execution main body of the home scene generation method in the embodiment of the present application may be a home scene generation device provided in the embodiment of the present application, or different types of electronic devices such as a server device, a physical host, or a User Equipment (UE) integrated with the home scene generation device, where the home scene generation device may be implemented in a hardware or software manner, and the UE may specifically be a terminal device such as a smart phone, a tablet computer, a notebook computer, a palm computer, a desktop computer, or a personal digital assistant (Personal Digital Assistant, PDA).
The electronic device may be operated in a single operation mode, or may also be operated in a device cluster mode.
Referring to fig. 1, fig. 1 is a schematic view of a home scene generating system provided in an embodiment of the present application. The home scene generation system may include an electronic device 100, and a home scene generation apparatus is integrated in the electronic device 100.
In addition, as shown in fig. 1, the home scene generation system may further include a memory 200 for storing data, such as text data.
It should be noted that the schematic view of the home scene generation system shown in fig. 1 is only an example. The home scene generation system and the scenario described in the embodiments of the present application are intended to describe the technical solutions of the embodiments more clearly and do not constitute a limitation on them; those skilled in the art will appreciate that, as the home scene generation system evolves and new service scenarios appear, the technical solutions provided in the embodiments of the present application remain applicable to similar technical problems.
Next, a home scene generation method provided in an embodiment of the present application is described. In this embodiment, an electronic device is used as the execution body; to simplify the description, the execution body is omitted in the subsequent method embodiments. The home scene generation method includes: acquiring a target video to be reconstructed; performing scene reconstruction based on an indoor image frame in the target video to obtain an initial home decoration scene, wherein the indoor image frame contains home decoration commodities, and the initial home decoration scene contains commodity models corresponding to the home decoration commodities; and correlating the commodity information of the home decoration commodity with the commodity model to obtain a target home decoration scene.
Referring to fig. 2, fig. 2 is a schematic flow chart of a home scene generating method according to an embodiment of the present application. It should be noted that although a logical order is depicted in the flowchart, in some cases the steps depicted or described may be performed in a different order than presented herein. The home scene generation method specifically comprises the following steps 201 to 203, wherein:
201. Obtaining a target video to be reconstructed.
For ease of understanding, the application background of the present application is first described, but this is not to be construed as an admission of prior art: with the development of technology, smart televisions have become standard in every household, their hardware capabilities are increasingly powerful, and the products users experience on the television are increasingly rich. Video playback is the most commonly used television function. When a user watches videos such as TV dramas and movies on the television, the user sometimes wishes to obtain the home decoration layout shown in the picture and information about the home decoration items in that layout, so as to adjust the home decoration layout of his or her own room. At present, however, the user can only record the picture by photographing or similar means and then obtain the home decoration layout and the item information by searching, which is very cumbersome.
Therefore, in order to solve the above problems, the present application provides a method for reconstructing a home decoration scene in a video, which can be applied to an electronic device such as a television, a projector, a mobile phone, etc. that can be used for playing the video.
The target video refers to a video containing a home decoration scene to be reconstructed, for example a TV drama, a movie or a short video containing such a scene. For example, while the electronic device is playing a video, the user may designate the played video as the target video by inputting an instruction. For instance, when a user watching a TV drama on a television sees a home decoration layout that he or she wants to obtain, the user can input an instruction through the television's remote controller, and after receiving the instruction the television takes the currently played TV drama as the target video to be reconstructed. That is, the step of acquiring the target video to be reconstructed includes:
and responding to the scene generation instruction, and taking the currently played video as a target video to be reconstructed.
Alternatively, the television may acquire the image frame currently being played in the TV drama, screen out the image frames to be used for reconstruction from the TV drama according to a preset time range and the playing time of each image frame in the TV drama, and take the video formed by the screened image frames as the target video.
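As an illustrative sketch of this frame-screening idea (not part of the patent text), the snippet below keeps the frames whose playing time falls within a preset window around the currently played frame; the window length and the VideoFrame structure are assumptions introduced only for this example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VideoFrame:
    index: int          # position of the frame in the video
    timestamp: float    # playing time of the frame, in seconds
    image: object       # decoded image data (e.g. a numpy array)

def select_frames_for_reconstruction(frames: List[VideoFrame],
                                     current_time: float,
                                     window_seconds: float = 10.0) -> List[VideoFrame]:
    """Keep only the frames whose playing time lies within the preset
    time range around the currently played frame; these frames form
    the target video used for reconstruction."""
    start = current_time - window_seconds
    end = current_time + window_seconds
    return [f for f in frames if start <= f.timestamp <= end]
```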
202. And performing scene reconstruction based on the indoor image frames in the target video to obtain an initial home decoration scene, wherein the indoor image frames contain home decoration commodities, and the initial home decoration scene contains commodity models corresponding to the home decoration commodities.
An indoor image frame refers to an image frame in the target video that contains an indoor scene. Because the target video may contain both indoor image frames and outdoor image frames showing outdoor scenes, and outdoor image frames do not contain the home decoration layout the user wants to reconstruct, the electronic device may screen each image frame of the target video to obtain the indoor image frames. For example, a trained indoor object recognition model can identify whether common indoor objects such as teacups, household appliances, sofas or beds exist in each image frame, and the image frames in which such objects exist are taken as indoor image frames. The indoor object recognition model may be obtained by training AlexNet or another open-source object recognition/object detection model, which is not limited in the embodiments of the present application.
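A minimal sketch of this screening step follows; it assumes an already-trained object detector is available as a callable detect_objects(image) returning class labels. The detector, the label names and the class set are assumptions for illustration, not details from the patent.

```python
from typing import Callable, Iterable, List

# Object classes treated as evidence of an indoor scene (assumed label set).
INDOOR_CLASSES = {"teacup", "sofa", "bed", "television", "refrigerator"}

def is_indoor_frame(image,
                    detect_objects: Callable[[object], Iterable[str]]) -> bool:
    """Return True if the detector finds at least one common indoor object."""
    labels = set(detect_objects(image))
    return bool(labels & INDOOR_CLASSES)

def screen_indoor_frames(images: List[object],
                         detect_objects: Callable[[object], Iterable[str]]) -> List[object]:
    """Keep only the image frames classified as indoor."""
    return [img for img in images if is_indoor_frame(img, detect_objects)]
```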
The initial home decoration scene is a scene which is obtained after scene reconstruction and is composed of three-dimensional models (namely commodity models) corresponding to all home decoration commodities in an indoor image frame, and the initial home decoration scene comprises the three-dimensional models corresponding to all home decoration commodities and the position relation among all the three-dimensional models.
The method for reconstructing the scene is not limited in the embodiments of the present application. For example, point clouds corresponding to the indoor image frames can be generated through an SFM (Structure From Motion) algorithm and an MVS (Multiple View Stereo) algorithm, and then the three-dimensional model corresponding to each home decoration commodity is generated based on the point cloud corresponding to that commodity. It can be understood that the surface of the reconstructed three-dimensional model is rendered with the surface material corresponding to the home decoration commodity.
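A rough sketch of that reconstruction flow is given below. It assumes an external SfM/MVS toolchain (e.g. COLMAP) and a segmentation/meshing step are wrapped behind the callables passed in; none of these helper names come from the patent.

```python
from typing import Callable, Dict, Sequence

def reconstruct_commodity_models(
    indoor_frames: Sequence[object],
    run_sfm: Callable,        # hypothetical wrapper around an SfM tool: frames -> camera poses
    run_mvs: Callable,        # hypothetical wrapper around an MVS tool: (frames, poses) -> dense point cloud
    segment_cloud: Callable,  # hypothetical segmentation: dense cloud -> {commodity name: point cloud}
    mesh_points: Callable,    # hypothetical meshing: point cloud -> textured 3D commodity model
) -> Dict[str, object]:
    """Sketch of the SFM + MVS reconstruction flow described above."""
    poses = run_sfm(indoor_frames)
    dense_cloud = run_mvs(indoor_frames, poses)
    per_commodity = segment_cloud(dense_cloud)
    return {name: mesh_points(points) for name, points in per_commodity.items()}
```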
In some embodiments, to avoid the situation where the indoor image frames in the target video capture the indoor scene from only some angles, which would make the reconstructed initial home decoration scene inaccurate, after the indoor image frames in the target video are identified, multi-view image frames corresponding to the indoor image frames are generated and added to the set of indoor image frames to obtain new indoor image frames. Illustratively, the multi-view image frames corresponding to an indoor image frame may be generated by a generative adversarial network (Generative Adversarial Networks, GAN).
In other embodiments, to prevent persons captured in the indoor image frames from affecting the accuracy of the scene reconstruction, the image areas corresponding to persons in the indoor image frames may be identified, the image content in those areas (i.e., the persons) deleted, and the image content then filled in again. For example, the image content may be filled in by a content-aware fill algorithm; the specific process is not described in detail here.
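The patent names a content-aware fill algorithm without further detail; as a stand-in illustration only, OpenCV's classical inpainting can fill the region left after deleting a person, given a binary mask of that region (the mask itself would come from a separate person-segmentation step, which is assumed here).

```python
import cv2
import numpy as np

def remove_person_and_fill(frame_bgr: np.ndarray, person_mask: np.ndarray) -> np.ndarray:
    """Delete the image content inside the person mask and fill it back in.

    frame_bgr:   H x W x 3 uint8 image (an indoor image frame).
    person_mask: H x W mask, non-zero where a person was detected.
    """
    mask = (person_mask > 0).astype(np.uint8) * 255
    # Telea inpainting propagates surrounding content into the masked area.
    return cv2.inpaint(frame_bgr, mask, 3, cv2.INPAINT_TELEA)
```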
203. And correlating the commodity information of the home decoration commodity with the commodity model to obtain a target home decoration scene.
The commodity information of a home decoration commodity may refer to at least one of the brand of the home decoration commodity, its commodity model number, and a commodity purchase link. For example, the electronic device may identify each home decoration commodity in the indoor image frames, and search for the commodity information of each home decoration commodity through the image search function of a preset search engine.
In performing step 203, associating may refer to displaying the commodity information of the home decoration commodity in the device screen area corresponding to the commodity model. For ease of understanding, referring to fig. 3, fig. 3 shows a target home decoration scene 301 containing a plurality of commodity models 303 associated with commodity information 302 (not all commodity information and commodity models are shown).
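One simple way to realize this association is a mapping from commodity-model identifiers to their commodity information, which also supports the later trigger-and-display behaviour. This is a sketch under the assumption that each commodity model has a stable identifier; the field names follow the commodity information listed earlier (brand, model number, purchase link).

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class CommodityInfo:
    brand: str
    model_number: str
    purchase_link: str

class TargetHomeScene:
    """Initial home decoration scene plus a model-id -> commodity-info mapping."""

    def __init__(self, scene_file: str):
        self.scene_file = scene_file
        self._info: Dict[str, CommodityInfo] = {}

    def associate(self, commodity_model_id: str, info: CommodityInfo) -> None:
        """Associate the commodity information with a commodity model."""
        self._info[commodity_model_id] = info

    def on_model_triggered(self, commodity_model_id: str) -> Optional[CommodityInfo]:
        """Called when the user triggers a commodity model; returns the info to display."""
        return self._info.get(commodity_model_id)
```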
After the target home decoration scene is obtained, the electronic equipment can store the file corresponding to the target home decoration scene in a preset database, and a user can open the file corresponding to the target home decoration scene at any time so as to view the target home decoration scene.
In summary, the home decoration scene generation method provided by the embodiments of the application includes: acquiring a target video to be reconstructed; performing scene reconstruction based on an indoor image frame in the target video to obtain an initial home decoration scene, wherein the indoor image frame contains home decoration commodities, and the initial home decoration scene contains commodity models corresponding to the home decoration commodities; and correlating the commodity information of the home decoration commodity with the commodity model to obtain a target home decoration scene.
Therefore, the home decoration scene generation method provided by the embodiments of the application can automatically screen out indoor image frames containing indoor scenes from a video, reconstruct the home decoration scene contained in those indoor scenes, associate the commodity information of the home decoration commodities with the home decoration scene, and thereby simplify the operations required for a user to acquire the home decoration scene.
In some embodiments, in order to reduce the amount of information displayed in the target home decoration scene, to avoid confusing the user with excessive information, and to prevent the displayed commodity information from blocking the commodity models in the target home decoration scene or blocking other commodity information, when the commodity information and the commodity models are associated, the scene may be set so that the commodity information associated with a commodity model is displayed only after the user triggers that commodity model. That is, after the step of associating the commodity information of the home decoration commodity with the commodity model to obtain the target home decoration scene, the method further includes:
And responding to a trigger instruction of a target commodity model in the target home decoration scene, and displaying target commodity information of target home decoration commodities associated with the target commodity model.
The trigger instruction may be input by a user through a remote controller or the like, or may also be input through a touch screen or the like, which is not limited in the embodiment of the present application. For example, when the electronic device is a television, after a file corresponding to a target home decoration scene is opened, so that after the target home decoration scene is displayed on a television screen, a user can select a target commodity model from commodity models contained in the target home decoration scene through a moving key on a remote controller, a trigger instruction for the target commodity model is input through a confirmation key on the remote controller, and after the trigger instruction for the target commodity model is received, the television can display target commodity information associated with the target commodity model on the television screen.
Although steps 201-203 provide a method for acquiring indoor image frames from the target video, that method has certain limitations and cannot exclude the case where an outdoor scene also contains home decoration commodities. Therefore, in order to improve the accuracy of acquiring indoor image frames, the present application further provides a method. Referring to fig. 4, before the step of performing scene reconstruction based on the indoor image frames in the target video to obtain an initial home decoration scene, the method further includes:
401. Candidate image frames including a ceiling are acquired from each image frame in the target video.
For example, an open source object recognition model/object detection model such as AlexNet may be trained to obtain a building material detection model, and whether or not a ceiling is included in each image frame is detected by the building material detection model to obtain a candidate image frame including the ceiling.
It should be noted that, in the embodiments of the present application, the specific position of the ceiling in the candidate image frame needs to be acquired, so an object detection model may be selected as the model to be trained and trained to obtain the building material detection model. A subsequent step of the embodiments also needs to acquire the position of the wall in the candidate image frame, so when the object detection model is trained, its ability to detect walls may be trained at the same time, to obtain a building material detection model capable of detecting both ceilings and walls.
402. And identifying and obtaining the wall body in each candidate image frame.
For example, the wall in the candidate image frame may be detected by the building material detection model mentioned in step 401, which is not described in detail.
403. And judging whether each candidate image frame is an indoor image frame according to the relative position relationship between the wall body and the ceiling, and obtaining a judging result.
In the embodiments of the present application, indoor image frames are determined according to the relative positional relationship between the wall and the ceiling because, if a candidate image frame is an indoor image frame, the wall and the ceiling present a specific structural positional relationship. For example, in an indoor image frame, since the wall and the ceiling usually belong to the same room, they intersect, and the angle they present is usually close to a right angle. In an outdoor image frame, by contrast, the wall and the ceiling usually belong to different buildings rather than the same room, so they usually do not intersect; even if they appear to intersect in an outdoor image frame because of the viewing angle, the probability that the presented angle happens to be close to a right angle is low. Therefore, the method of step 403 can accurately identify the indoor image frames among the candidate image frames.
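A geometric sketch of this check: assuming the building material detection step can provide an approximate surface normal for the detected wall and ceiling, the frame is treated as indoor when the two surfaces intersect and meet at roughly a right angle. The normals, the tolerance and the helper itself are assumptions for illustration.

```python
import numpy as np

def is_indoor_by_wall_ceiling(wall_normal: np.ndarray,
                              ceiling_normal: np.ndarray,
                              intersects: bool,
                              tolerance_deg: float = 15.0) -> bool:
    """Judge a candidate frame as indoor when the wall and ceiling intersect
    and the angle between them is close to a right angle."""
    if not intersects:
        return False
    w = wall_normal / np.linalg.norm(wall_normal)
    c = ceiling_normal / np.linalg.norm(ceiling_normal)
    # Angle between the two surfaces, derived from their unit normals.
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(w, c)), -1.0, 1.0)))
    return abs(angle - 90.0) <= tolerance_deg
```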
The target home decoration scene obtained in steps 201-203 only contains the commodity information of the home decoration commodities and the positional relationships between the home decoration commodities, so based on that scene the user cannot obtain the visual effect of the home decoration commodities actually placed in a room, and its practicality is limited. To solve this problem, the present application provides a method which, after modeling the home decoration commodities, builds a room model formed by a rendered wall model and a rendered ceiling model, and places the modeled commodity models into the room model to obtain a target home decoration scene formed jointly by the commodity models and the room model. Referring to fig. 5, the step of performing scene reconstruction based on the indoor image frames in the target video to obtain an initial home decoration scene includes:
501. And carrying out three-dimensional modeling on the home decoration commodity contained in the indoor image frame in the target video to obtain a commodity model corresponding to the home decoration commodity, commodity dimensions of the commodity models and the position relation among the commodity models.
The three-dimensional modeling method may refer to the description of step 202, for example, the SFM algorithm and the MVS algorithm may be combined to obtain the commodity model corresponding to the home decoration commodity, and the positional relationship between the commodity models.
502. And constructing a room model for placing the commodity models based on commodity sizes of the commodity models and the position relations, wherein the room model is composed of rendered building material models, and the building material models comprise wall models, ceiling models and/or ground models.
The room model is a model of the outline of the room in which the commodity models are placed, and is composed of at least one of a rendered wall model, a rendered ceiling model, and a rendered floor model. For example, in fig. 3, the room model is composed of a rendered wall model and a rendered floor model.
The purpose of constructing the room model in step 502 is to provide the user with the visual effect of the home decoration commodities actually placed in a room. It can be understood that, since the room model is formed by rendered building material models, that is, the building material models already carry their corresponding materials, when the user opens the file corresponding to the target home decoration scene to display it on the screen of the electronic device, the visual matching effect between the home decoration commodities and the textures on the building material models can be seen.
For example, a room model to be rendered may be constructed first, and then the materials of the building material models are identified from the indoor image frames and rendered onto the room model to be rendered, to obtain the rendered room model. For example, the step of constructing a room model for placing each of the commodity models based on the positional relationship may be performed as follows:
(1.1) generating a minimum bounding box of each of the commodity models based on the positional relationship.
The minimum bounding box refers to the smallest three-dimensional volume that can enclose all the commodity models. For example, the shape of the minimum bounding box may be set to be a rectangular box (cuboid); one commodity model is taken as a reference point, and the smallest rectangular box that can enclose all the commodity models is determined based on the positional relationships between the reference model and the other commodity models, together with the commodity sizes of the commodity models; this smallest rectangular box is taken as the minimum bounding box of the commodity models.
The embodiments of the present application do not limit the specific algorithm for generating the minimum bounding box; for example, an axis-aligned bounding box (AABB) algorithm, an oriented bounding box (OBB) algorithm, or the like may be used.
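For the axis-aligned variant, the minimum bounding box reduces to coordinate-wise minima and maxima over all commodity model vertices once every model has been placed using the positional relationships. The sketch below assumes the vertex arrays are already expressed in one common coordinate system.

```python
from typing import List, Tuple
import numpy as np

def minimum_aabb(model_vertices: List[np.ndarray]) -> Tuple[np.ndarray, np.ndarray]:
    """Axis-aligned bounding box enclosing every commodity model.

    model_vertices: list of (N_i x 3) vertex arrays, one per commodity model,
                    all in the same (room) coordinate system.
    Returns (min_corner, max_corner), each a length-3 array.
    """
    all_points = np.vstack(model_vertices)
    return all_points.min(axis=0), all_points.max(axis=0)

def bounding_box_size(min_corner: np.ndarray, max_corner: np.ndarray) -> np.ndarray:
    """Length, width and height of the bounding box."""
    return max_corner - min_corner
```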
(1.2) generating a model to be rendered based on a bounding box size of the minimum bounding box, wherein the model to be rendered is composed of a building material model to be rendered.
The model to be rendered refers to a room model to be rendered.
For ease of understanding, the embodiments of the present application take the example in which the minimum bounding box is a rectangular box, whose bounding box dimensions include the length, width and height of the box.
It can be understood that the positions of the building material models to be generated can be determined based on the bounding box dimensions of the minimum bounding box. For example, the generation position of the ceiling model can be determined from the top face of the minimum bounding box, that is, from the area given by the product of the length and the width in the bounding box dimensions, placed at the height in the bounding box dimensions; the wall models and the floor model are handled in the same way and are not described again.
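Continuing the sketch, the room model to be rendered can be laid out directly from the bounding box corners: the floor on the bottom face, the ceiling on the top face, and four walls on the side faces. The corner ordering and the choice of z as the vertical axis are assumptions of this example.

```python
from typing import Dict, List
import numpy as np

def room_model_to_render(min_corner: np.ndarray, max_corner: np.ndarray) -> Dict[str, List[np.ndarray]]:
    """Build the un-rendered building material models (as corner lists) from the
    minimum bounding box: floor and ceiling from the bottom/top faces, walls from the sides."""
    x0, y0, z0 = min_corner   # z is taken as the vertical axis in this sketch
    x1, y1, z1 = max_corner
    corners = lambda pts: [np.array(p, dtype=float) for p in pts]
    return {
        "floor":   corners([(x0, y0, z0), (x1, y0, z0), (x1, y1, z0), (x0, y1, z0)]),
        "ceiling": corners([(x0, y0, z1), (x1, y0, z1), (x1, y1, z1), (x0, y1, z1)]),
        "wall_1":  corners([(x0, y0, z0), (x1, y0, z0), (x1, y0, z1), (x0, y0, z1)]),
        "wall_2":  corners([(x1, y0, z0), (x1, y1, z0), (x1, y1, z1), (x1, y0, z1)]),
        "wall_3":  corners([(x1, y1, z0), (x0, y1, z0), (x0, y1, z1), (x1, y1, z1)]),
        "wall_4":  corners([(x0, y1, z0), (x0, y0, z0), (x0, y0, z1), (x0, y1, z1)]),
    }
```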
(1.3) identifying model materials of the building material model to be rendered from the indoor image frames.
The method for identifying the material is not limited in the embodiment of the application. For example, the model material of the building material model in the indoor image frame can be identified through the trained material identification model. For example, the open source SENet model may be trained to obtain a trained material recognition model, and the specific process is not described again.
(1.4) rendering the building material model to be rendered based on the model material, to obtain a rendered building material model and a room model formed by the rendered building material model.
After rendering, the material is applied to the building material model to be rendered, so as to obtain a rendered building material model and a room model formed by the rendered building material models.
503. And based on the position relation, placing each commodity model in the room model to obtain an initial home decoration scene.
The application also provides a home scene generation system which can be used for generating the target home scene. Referring to fig. 6, a block diagram of a home scene generation system 600 is shown in fig. 6, the home scene generation system 600 includes a scene recognition module 601, a three-dimensional modeling module 602, a home item recognition module 603, a layout recognition module 604, and a model save module 605.
The scene recognition module 601 is configured to recognize an indoor image frame from image frames contained in a target video.
The three-dimensional modeling module 602 is configured to perform three-dimensional modeling on home decoration commodities included in the indoor image frame, obtain commodity models corresponding to the home decoration commodities, and construct a model to be rendered.
The home article identification module 603 is configured to obtain article information of the home-made article.
The layout identifying module 604 is configured to identify model materials of a building material model to be rendered from the indoor image frame, render the model materials to the building material model to be rendered, obtain a rendered room model, and generate an initial home decoration scene based on the room model and each commodity model.
The model saving module 605 is configured to associate the commodity information of the home decoration commodity with a commodity model corresponding to the home decoration commodity in the initial home decoration scene, obtain a target home decoration scene, and store the target home decoration scene in a preset database, where the database may refer to a background database of the electronic device, for example, when the electronic device refers to a television, the database may refer to a background database of the television.
In actual use, the home scene generation system 600 may operate according to the following procedure (a minimal orchestration sketch is given after this list):
(A) When watching a video, a user sees a home decoration scene to be acquired, at this time, the user inputs a scene generation instruction to the home scene generation system 600, and triggers the scene recognition module 601;
(B) The scene recognition module 601 recognizes an indoor image frame from a currently played video, and inputs the indoor image frame into the three-dimensional modeling module 602;
(C) The three-dimensional modeling module 602 performs three-dimensional modeling on the home decoration commodities contained in the indoor image frames to obtain the commodity models corresponding to the home decoration commodities, constructs the model to be rendered, and inputs the commodity models corresponding to the home decoration commodities into the layout recognition module 604;
(D) The layout identification module 604 identifies model materials of the building material model to be rendered from the indoor image frame, renders the model materials to the building material model to be rendered, obtains a rendered room model, generates an initial home decoration scene based on the room model and each commodity model, and inputs the initial home decoration scene into the model storage module 605;
(E) The household article identification module 603 searches to obtain commodity information of the household commodity, and inputs the commodity information into the model storage module 605;
(F) The model saving module 605 associates the commodity information of the home decoration commodity with the commodity model corresponding to the home decoration commodity in the initial home decoration scene to obtain a target home decoration scene, and stores the target home decoration scene in a preset database.
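A minimal orchestration sketch of the procedure above, with each module represented by a callable placeholder; the module interfaces and the database object are assumptions, and only the order of calls follows steps (A) to (F).

```python
def generate_target_home_scene(played_video, modules, database):
    """Steps (B) to (F): screen indoor frames, model commodities, render the room,
    attach commodity information and store the result."""
    indoor_frames = modules["scene_recognition"](played_video)                        # (B)
    commodity_models, model_to_render = modules["three_d_modeling"](indoor_frames)   # (C)
    initial_scene = modules["layout_recognition"](indoor_frames,                     # (D)
                                                  commodity_models,
                                                  model_to_render)
    commodity_info = modules["home_item_recognition"](indoor_frames)                 # (E)
    target_scene = modules["model_saving"].associate(initial_scene, commodity_info)  # (F)
    database.store(target_scene)
    return target_scene
```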
In order to better implement the home scene generation method in the embodiment of the present application, based on the home scene generation method, the embodiment of the present application further provides a home scene generation device, as shown in fig. 7, which is a schematic structural diagram of one embodiment of the home scene generation device in the embodiment of the present application, where the home scene generation device 700 includes:
An acquisition unit 701, configured to acquire a target video to be reconstructed;
a reconstruction unit 702, configured to perform scene reconstruction based on an indoor image frame in the target video, to obtain an initial home decoration scene, where the indoor image frame includes a home decoration commodity, and the initial home decoration scene includes a commodity model corresponding to the home decoration commodity;
and the associating unit 703 is configured to associate the commodity information of the home decoration commodity with the commodity model, so as to obtain a target home decoration scene.
In a possible implementation of the present application, the association unit 703 is further configured to:
and responding to a trigger instruction of a target commodity model in the target home decoration scene, and displaying target commodity information of target home decoration commodities associated with the target commodity model.
In a possible implementation of the present application, the reconstruction unit 702 is configured to:
acquiring candidate image frames containing ceilings from each image frame in the target video;
identifying and obtaining a wall body in each candidate image frame;
and judging whether each candidate image frame is an indoor image frame according to the relative position relationship between the wall body and the ceiling, and obtaining a judging result.
In a possible implementation of the present application, the reconstruction unit 702 is configured to:
three-dimensional modeling is carried out on home decoration commodities contained in indoor image frames in the target video, and a commodity model corresponding to the home decoration commodities, commodity sizes of the commodity models and a position relation among the commodity models are obtained;
constructing a room model for placing each commodity model based on the commodity size and the position relation of each commodity model, wherein the room model is composed of rendered building material models, and the building material models comprise wall models, ceiling models and/or ground models;
and based on the position relation, placing each commodity model in the room model to obtain an initial home decoration scene.
In a possible implementation of the present application, the reconstruction unit 702 is configured to:
generating a minimum bounding box of each commodity model based on the commodity size and the position relation of each commodity model;
generating a model to be rendered based on the bounding box size of the minimum bounding box, wherein the model to be rendered is composed of building material models to be rendered;
identifying and obtaining model materials of the building material model to be rendered from the indoor image frame;
Rendering the building material model to be rendered based on the model material to obtain a rendered building material model and a room model formed by the rendered building material model.
In one possible implementation of the present application, the commodity information includes a commodity purchase link, a commodity model number, and/or a brand.
In a possible implementation manner of the present application, the acquiring unit 701 is further configured to:
and responding to the scene generation instruction, and taking the currently played video as a target video to be reconstructed.
In the implementation, each module may be implemented as an independent entity, or may be combined arbitrarily, and implemented as the same entity or several entities, and the implementation of each module may be referred to the foregoing method embodiment, which is not described herein again.
The home scene generating device can execute steps in the home scene generating method in any embodiment, so that the beneficial effects of the home scene generating method in any embodiment of the application can be realized, and detailed descriptions are omitted herein.
In addition, in order to better implement the home scene generation method in the embodiment of the present application, on the basis of the home scene generation method, the embodiment of the present application further provides an electronic device, referring to fig. 8, fig. 8 shows a schematic structural diagram of the electronic device in the embodiment of the present application, and specifically, the electronic device provided in the embodiment of the present application includes a processor 801, where the processor 801 is configured to implement each step of the home scene generation method in any embodiment when executing a computer program stored in a memory 802; alternatively, the processor 801 is configured to implement the functions of the respective modules in the corresponding embodiment of fig. 7 when executing the computer program stored in the memory 802.
By way of example, a computer program may be partitioned into one or more modules/units that are stored in memory 802 and executed by processor 801 to accomplish the embodiments of the present application. One or more of the modules/units may be a series of computer program instruction segments capable of performing particular functions to describe the execution of the computer program in a computer device.
Electronic devices may include, but are not limited to, processor 801, memory 802. It will be appreciated by those skilled in the art that the illustrations are merely examples of electronic devices and are not limiting of electronic devices, and may include more or fewer components than illustrated, or may combine certain components, or different components.
The processor 801 may be a central processing unit (Central Processing Unit, CPU), but may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the electronic device and connects the various parts of the overall electronic device through various interfaces and lines.
The memory 802 may be configured to store the computer program and/or the modules, and the processor 801 implements various functions of the electronic device by running or executing the computer program and/or the modules stored in the memory 802 and invoking the data stored in the memory 802. The memory 802 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data (such as audio data and video data) created according to the use of the electronic device, and the like. In addition, the memory may include a high-speed random access memory, and may further include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the above-described home scene generation device, the electronic device, and their corresponding modules, reference may be made to the description of the home scene generation method in any embodiment, and the details are not repeated here.
Those of ordinary skill in the art will appreciate that all or some of the steps of the various methods in the above embodiments may be completed by instructions, or completed by instructions controlling related hardware, and the instructions may be stored in a readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a readable storage medium in which a computer program is stored. When the computer program is executed by a processor, the steps of the home scene generation method in any embodiment of the present application are performed; for specific operations, reference may be made to the description of the home scene generation method in any embodiment.
The readable storage medium may include: a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk, and the like.
Because the instructions stored in the readable storage medium can execute the steps of the home scene generation method in any embodiment of the present application, the beneficial effects that can be achieved by the home scene generation method in any embodiment of the present application can also be achieved; for details, see the foregoing description, which is not repeated here.
The home scene generation method, apparatus, storage medium, and electronic device provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the content of this description should not be construed as limiting the present application.

Claims (10)

1. A home decoration scene generation method, comprising:
acquiring a target video to be reconstructed;
performing scene reconstruction based on an indoor image frame in the target video to obtain an initial home decoration scene, wherein the indoor image frame contains home decoration commodities, and the initial home decoration scene contains commodity models corresponding to the home decoration commodities;
and associating the commodity information of the home decoration commodity with the commodity model to obtain a target home decoration scene.
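The following Python sketch is an editorial illustration of these three steps only; every name in it is a hypothetical placeholder, and the toy "reconstruction" merely builds one commodity model per detected commodity rather than performing real 3D reconstruction.

```python
from dataclasses import dataclass, field

@dataclass
class CommodityModel:
    name: str
    info: dict = field(default_factory=dict)

@dataclass
class HomeDecorationScene:
    commodity_models: list

def generate_target_scene(indoor_frames, catalog):
    """Outline of the three claimed steps. `indoor_frames` stands in for the indoor
    image frames of the acquired target video, each represented simply as the list
    of home decoration commodities visible in it; `catalog` maps a commodity name
    to its commodity information."""
    # Scene reconstruction (sketched): one commodity model per detected commodity.
    detected = sorted({name for frame in indoor_frames for name in frame})
    scene = HomeDecorationScene([CommodityModel(n) for n in detected])
    # Association: attach commodity information to each commodity model, turning
    # the initial home decoration scene into the target home decoration scene.
    for model in scene.commodity_models:
        model.info = catalog.get(model.name, {})
    return scene

# Toy usage.
frames = [["sofa", "lamp"], ["sofa", "table"]]
catalog = {"sofa": {"purchase_link": "https://example.com/sofa", "brand": "ExampleBrand"}}
target_scene = generate_target_scene(frames, catalog)
```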
2. The home decoration scene generation method according to claim 1, wherein after the associating the commodity information of the home decoration commodity with the commodity model to obtain a target home decoration scene, the method further comprises:
in response to a trigger instruction for a target commodity model in the target home decoration scene, displaying target commodity information of a target home decoration commodity associated with the target commodity model.
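As a toy illustration of this interaction (the identifiers and the dictionary-based association are assumptions), the trigger handling reduces to a lookup keyed by the triggered commodity model:

```python
def on_trigger(target_model_id: str, associations: dict) -> dict:
    """Hypothetical handler: when a target commodity model in the target home
    decoration scene receives a trigger instruction, look up and return the
    associated target commodity information so that it can be displayed."""
    return associations.get(target_model_id, {})

# Toy usage.
associations = {"sofa_model_01": {"purchase_link": "https://example.com/sofa", "brand": "ExampleBrand"}}
info_to_display = on_trigger("sofa_model_01", associations)
```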
3. The home decoration scene generation method according to claim 1, wherein before the scene reconstruction is performed based on the indoor image frame in the target video to obtain the initial home decoration scene, the method further comprises:
acquiring, from the image frames in the target video, candidate image frames containing a ceiling;
identifying a wall in each candidate image frame;
and determining whether each candidate image frame is an indoor image frame according to the relative positional relationship between the wall and the ceiling, to obtain a determination result.
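Claim 3 leaves the concrete decision rule open; the sketch below assumes, purely for illustration, that a frame counts as indoor when the detected ceiling region lies above every detected wall region in image coordinates.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Region:
    """Vertical extent of a detected region in an image frame, in pixel rows."""
    top: int
    bottom: int

def is_indoor_frame(ceiling: Optional[Region], walls: List[Region]) -> bool:
    """Illustrative judgement: a candidate frame is treated as indoor when a ceiling
    was detected and its region sits above every detected wall region. This concrete
    rule is an assumption; the claim only fixes that the relative position is used."""
    if ceiling is None or not walls:
        return False
    return all(ceiling.bottom <= wall.top for wall in walls)

# Toy usage: ceiling in the upper part of the frame, one wall below it.
print(is_indoor_frame(Region(top=0, bottom=120), [Region(top=130, bottom=480)]))  # True
```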
4. The home decoration scene generation method according to claim 1, wherein the performing scene reconstruction based on the indoor image frame in the target video to obtain an initial home decoration scene comprises:
performing three-dimensional modeling on the home decoration commodities contained in the indoor image frames of the target video to obtain commodity models corresponding to the home decoration commodities, commodity sizes of the commodity models, and a positional relationship among the commodity models;
constructing a room model for placing each commodity model based on the commodity sizes and the positional relationship of the commodity models, wherein the room model is composed of rendered building material models, and the building material models comprise a wall model, a ceiling model and/or a floor model;
and placing each commodity model in the room model based on the positional relationship, to obtain an initial home decoration scene.
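As a rough illustration of deriving a room large enough to hold the commodity models from their sizes and positional relationship, the sketch below computes an axis-aligned extent with a fixed margin; both simplifications are assumptions, not the claimed construction.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class CommodityModel:
    size: Tuple[float, float, float]      # width, depth, height of the commodity model
    position: Tuple[float, float, float]  # centre of the model relative to a shared origin

def room_extent(models: Dict[str, CommodityModel], margin: float = 0.5):
    """Axis-aligned extent (per-axis (min, max)) big enough to place every commodity model.
    The fixed margin and the axis-aligned simplification are illustrative assumptions."""
    axes = ([], [], [])
    for m in models.values():
        for axis, (centre, size) in enumerate(zip(m.position, m.size)):
            axes[axis].extend([centre - size / 2.0, centre + size / 2.0])
    return tuple((min(vals) - margin, max(vals) + margin) for vals in axes)

# Toy usage: a sofa and a table positioned relative to the same origin.
extent = room_extent({
    "sofa": CommodityModel(size=(2.0, 0.9, 0.8), position=(0.0, 0.0, 0.4)),
    "table": CommodityModel(size=(1.2, 0.6, 0.5), position=(1.5, 0.2, 0.25)),
})
```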
5. The home decoration scene generation method according to claim 4, wherein the constructing a room model for placing each commodity model based on the commodity sizes and the positional relationship of the commodity models comprises:
generating a minimum bounding box of each commodity model based on the commodity sizes and the positional relationship of the commodity models;
generating a model to be rendered based on the bounding box sizes of the minimum bounding boxes, wherein the model to be rendered is composed of building material models to be rendered;
identifying, from the indoor image frames, model materials of the building material models to be rendered;
and rendering the building material models to be rendered based on the model materials, to obtain rendered building material models and a room model composed of the rendered building material models.
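The sketch below illustrates two pieces of this step under simplifying assumptions: an axis-aligned minimum bounding box, and a rendering stand-in that merely assigns each building material model the material identified for it (the mapping is assumed to come from the identification step).

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class BuildingMaterialModel:
    kind: str                  # e.g. "wall", "ceiling", "floor"
    material: str = "to_render"

def minimum_bounding_box(vertices: List[Tuple[float, float, float]]):
    """Axis-aligned minimum bounding box of a commodity model's vertices (illustrative)."""
    mins = tuple(min(v[i] for v in vertices) for i in range(3))
    maxs = tuple(max(v[i] for v in vertices) for i in range(3))
    return mins, maxs

def render_with_materials(models: List[BuildingMaterialModel],
                          identified: Dict[str, str]) -> List[BuildingMaterialModel]:
    """Stand-in for the final rendering step: give each building material model the
    model material identified for it from the indoor image frame (mapping assumed)."""
    for m in models:
        m.material = identified.get(m.kind, m.material)
    return models

# Toy usage.
box = minimum_bounding_box([(0.0, 0.0, 0.0), (2.0, 0.9, 0.8)])
room = render_with_materials(
    [BuildingMaterialModel("wall"), BuildingMaterialModel("floor")],
    {"wall": "white_paint", "floor": "oak"},
)
```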
6. The home decoration scene generation method according to claim 1, wherein the commodity information comprises a commodity purchase link, a commodity model and/or a brand.
7. The home decoration scene generation method according to any one of claims 1 to 6, wherein the acquiring a target video to be reconstructed comprises:
in response to a scene generation instruction, taking the currently played video as the target video to be reconstructed.
8. A home scene generation apparatus, comprising:
the acquisition unit is used for acquiring a target video to be reconstructed;
the reconstruction unit is used for performing scene reconstruction based on an indoor image frame in the target video to obtain an initial home decoration scene, wherein the indoor image frame contains home decoration commodities, and the initial home decoration scene contains commodity models corresponding to the home decoration commodities;
and the association unit is used for associating the commodity information of the home decoration commodity with the commodity model to obtain a target home decoration scene.
9. An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the home decoration scene generation method according to any one of claims 1 to 7.
10. A readable storage medium, wherein the readable storage medium stores a computer program which, when executed by a processor, implements the steps of the home decoration scene generation method according to any one of claims 1 to 7.
CN202310306014.5A 2023-03-21 2023-03-21 Home scene generation method and device, electronic equipment and readable storage medium Pending CN116389791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310306014.5A CN116389791A (en) 2023-03-21 2023-03-21 Home scene generation method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310306014.5A CN116389791A (en) 2023-03-21 2023-03-21 Home scene generation method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116389791A true CN116389791A (en) 2023-07-04

Family

ID=86972491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310306014.5A Pending CN116389791A (en) 2023-03-21 2023-03-21 Home scene generation method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116389791A (en)

Similar Documents

Publication Publication Date Title
US10679061B2 (en) Tagging virtualized content
US11422671B2 (en) Defining, displaying and interacting with tags in a three-dimensional model
CN114119839B (en) Three-dimensional model reconstruction and image generation method, equipment and storage medium
CN112771536B (en) Augmented reality digital content search and sizing techniques
CN114119838B (en) Voxel model and image generation method, equipment and storage medium
US11055916B2 (en) Virtualizing content
US20080184121A1 (en) Authoring tool for providing tags associated with items in a video playback
KR102433857B1 (en) Device and method for creating dynamic virtual content in mixed reality
US20090316989A1 (en) Method and electronic device for creating an image collage
KR20130126545A (en) Comparing virtual and real images in a shopping experience
US20150199995A1 (en) Modular content generation, modification, and delivery system
CN110321048A (en) The processing of three-dimensional panorama scene information, exchange method and device
WO2017206746A1 (en) Method, apparatus and system for generating collocation renderings
TW201525739A (en) Presenting information based on a video
CN114332417A (en) Method, device, storage medium and program product for multi-person scene interaction
CN115639981A (en) Digital archive implementation method and system based on 3DGIS
TW202008781A (en) Generating method and playing method of multimedia file, multimedia file generation apparatus and multimedia file playback apparatus
US20160042233A1 (en) Method and system for facilitating evaluation of visual appeal of two or more objects
CN116389791A (en) Home scene generation method and device, electronic equipment and readable storage medium
Ng et al. Syntable: A synthetic data generation pipeline for unseen object amodal instance segmentation of cluttered tabletop scenes
US11562522B2 (en) Method and system for identifying incompatibility between versions of compiled software code
Saran et al. Augmented annotations: Indoor dataset generation with augmented reality
Zhang et al. Sceneviewer: Automating residential photography in virtual environments
WO2023125245A1 (en) Asset management method and apparatus, and electronic device and storage medium
CN115100327B (en) Method and device for generating animation three-dimensional video and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination