CN107643890A - Scene of game construction method and device - Google Patents
- Publication number: CN107643890A (application CN201710677570.8A)
- Authority: CN (China)
- Prior art keywords: scene, game, models, target scene
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The present invention proposes a game-scene construction method and device. The method includes: obtaining depth information of a target scene to be built in a game, where the depth information is generated by performing structured-light image processing on a real scene; building a 3D model of the target scene from the depth information; and implanting the 3D model into the game. The method builds the 3D model from the depth information of a real scene and implants the built 3D model into the game. Because a 3D model of a real scene is closer to reality than object models assembled in game-making software, implanting 3D models of real scenes into the game improves the sense of immersion of the game scene.
Description
Technical field
The present invention relates to the field of terminal devices, and in particular to a game-scene construction method and device.
Background
At present, games of all kinds emerge endlessly, and gaming has become part of people's entertainment and leisure. Three-dimensional games in particular are increasingly popular, because their game scenes better match human visual perception.
In the related art, game scenes are usually built with game-making software. For example, to create a lake or a tree in a game scene, models of the lake and the tree are selected in the software, and attributes such as the depth of the lake and the height of the tree are configured through attribute settings.
However, because objects in the game scene of the related art are created through models and attribute settings, the resulting scenes suffer from a poor sense of immersion.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
Therefore, the present invention proposes a game-scene construction method that implants 3D models of real scenes into a game so as to build game scenes with a stronger sense of immersion, solving the problem that game scenes built with software offer poor immersion.
The present invention also proposes a game-scene construction device.
The present invention also proposes a terminal device.
The present invention also proposes a computer-readable storage medium.
An embodiment of the first aspect of the present invention proposes a game-scene construction method, including:
obtaining depth information of a target scene to be built in a game, where the depth information is generated by performing structured-light image processing on a real scene; and
building a 3D model of the target scene according to the depth information, and implanting the 3D model into the game.
As an optional implementation of the first-aspect embodiment, implanting the 3D model into the game includes:
forming description information of the 3D model according to the 3D model, and extracting a keyword from the description information;
judging, according to the keyword, whether a first scene consistent with the target scene exists in the game; and
if it is judged that the first scene does not exist in the game, implanting the 3D model of the target scene into the game.
As an optional implementation of the first-aspect embodiment, judging according to the keyword whether a first scene consistent with the target scene exists in the game includes:
obtaining the keywords of the scenes pre-stored in the game to form a keyword set;
searching the keyword set for a keyword consistent with or similar to the keyword of the target scene; and
if no keyword consistent with or similar to the keyword of the target scene exists in the keyword set, judging that the first scene does not exist in the game.
As an optional implementation of the first-aspect embodiment, if a keyword consistent with or similar to the keyword of the target scene exists in the keyword set, it is judged that the first scene exists in the game; the 3D model is then implanted into the game at the position of the first scene, replacing the first scene.
As an optional implementation of the first-aspect embodiment, implanting the 3D model into the game includes:
monitoring a position-selection operation of the user;
determining, according to the monitored position-selection operation, position information of the 3D model in the game; and
implanting the 3D model into the game according to the position information.
As an optional implementation of the first-aspect embodiment, before obtaining the depth information of the target scene to be built in the game, the method includes:
emitting structured light toward the real target scene; and
collecting the light reflected by the structured light from the target scene to form the depth information of the target scene.
As an optional implementation of the first-aspect embodiment, the structured light is non-uniform structured light. The non-uniform structured light is a speckle pattern or a random dot pattern formed by a set of multiple light spots, and is formed by a diffractive optical element arranged in a projection device of the terminal, where a certain number of reliefs with differing groove depths are provided on the diffractive optical element.
With the game-scene construction method of the embodiment of the present invention, depth information of a target scene to be built in a game is obtained, where the depth information is generated by performing structured-light image processing on a real scene; a 3D model of the target scene is built according to the depth information; and the 3D model is implanted into the game. In this embodiment, the 3D model is built from the depth information of a real scene and the built 3D model is implanted into the game. Because a 3D model of a real scene is closer to reality than object models in game-making software, implanting 3D models of real scenes into the game improves the sense of immersion of the game scene and solves the problem that game scenes built with software offer poor immersion.
An embodiment of the second aspect of the present invention proposes a game-scene construction device, including:
an acquisition module for obtaining depth information of a target scene to be built in a game, where the depth information is generated by performing structured-light image processing on a real scene; and
an implantation module for building a 3D model of the target scene according to the depth information, and implanting the 3D model into the game.
As an optional implementation of the second-aspect embodiment, the implantation module includes:
an extraction unit for forming description information of the 3D model according to the 3D model, and extracting a keyword from the description information;
a judging unit for judging, according to the keyword, whether a first scene consistent with the target scene exists in the game; and
an implantation unit for implanting the 3D model of the target scene into the game when the judgment result is that the first scene does not exist in the game.
As an optional implementation of the second-aspect embodiment, the judging unit is further configured to:
obtain the keywords of the scenes pre-stored in the game to form a keyword set;
search the keyword set for a keyword consistent with or similar to the keyword of the target scene; and
if no keyword consistent with or similar to the keyword of the target scene exists in the keyword set, judge that the first scene does not exist in the game.
As an optional implementation of the second-aspect embodiment, the implantation unit is further configured to:
if a keyword consistent with or similar to the keyword of the target scene exists in the keyword set, judge that the first scene exists in the game; and
implant the 3D model into the game at the position of the first scene, replacing the first scene.
As an optional implementation of the second-aspect embodiment, the implantation module is further configured to:
monitor a position-selection operation of the user;
determine, according to the monitored position-selection operation, position information of the 3D model in the game; and
implant the 3D model into the game according to the position information.
As an optional implementation of the second-aspect embodiment, the game-scene construction device further includes:
an emission module for emitting structured light toward the real target scene; and
a forming module for collecting the light reflected by the structured light from the target scene and forming the depth information of the target scene.
As an optional implementation of the second-aspect embodiment, the structured light is non-uniform structured light. The non-uniform structured light is a speckle pattern or a random dot pattern formed by a set of multiple light spots, and is formed by a diffractive optical element arranged in a projection device of the terminal, where a certain number of reliefs with differing groove depths are provided on the diffractive optical element.
With the game-scene construction device of the embodiment of the present invention, depth information of a target scene to be built in a game is obtained, where the depth information is generated by performing structured-light image processing on a real scene; a 3D model of the target scene is built according to the depth information; and the 3D model is implanted into the game. In this embodiment, the 3D model is built from the depth information of a real scene and the built 3D model is implanted into the game. Because a 3D model of a real scene is closer to reality than object models in game-making software, implanting 3D models of real scenes into the game improves the sense of immersion of the game scene and solves the problem that game scenes built with software offer poor immersion.
An embodiment of the third aspect of the present invention proposes a terminal device, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the game-scene construction method described in the first-aspect embodiment.
An embodiment of the fourth aspect of the present invention proposes a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the game-scene construction method described in the first-aspect embodiment.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will become apparent in part from that description, or will be learned through practice of the invention.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a game-scene construction method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of a device assembly for projecting structured light provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of various forms of structured light provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of another game-scene construction method provided by an embodiment of the present invention;
Fig. 5 is a schematic flowchart of yet another game-scene construction method provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of a projection set of non-uniform structured light provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a game-scene construction device provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of another game-scene construction device provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of the image-processing circuit in a terminal device provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, with examples shown in the accompanying drawings, where the same or similar reference numerals throughout denote the same or similar elements, or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and are not to be construed as limiting it.
The game-scene construction method and device of the embodiments of the present invention are described below with reference to the accompanying drawings.
In the related art, game scenes are usually built with game-making software. However, because objects in the game scene are created through models and attribute settings, the scenes suffer from a poor sense of immersion.
To address this problem, an embodiment of the present invention proposes a game-scene construction method that implants 3D models of real scenes into a game to build game scenes with a stronger sense of immersion.
Fig. 1 is a schematic flowchart of a game-scene construction method provided by an embodiment of the present invention.
As shown in Fig. 1, the game-scene construction method includes the following steps:
Step 101: obtain depth information of a target scene to be built in the game.
The depth information is generated by performing structured-light image processing on a real scene.
As is known, a set of light rays projected in known spatial directions is collectively referred to as structured light.
As an example, Fig. 2 shows a device assembly for projecting structured light. Fig. 2 illustrates only the case where the projected structured light forms a set of lines; the principle is similar when the projected set forms a speckle pattern. As shown in Fig. 2, the device may include an optical projector and a camera. The optical projector projects structured light of a certain pattern into the space containing the measured object, forming on the object's surface an image of the light pattern modulated by the surface shape. This image is captured by the camera at another position, yielding the structured-light image.
After the structured-light image is obtained, the stripe image of the measured object is extracted from it, and image-data computation is performed on the stripe image of the measured object together with a reference stripe image according to a predetermined algorithm, to obtain the displacement of each stripe of the stripe image on the measured object relative to the corresponding reference stripe in the reference stripe image. The depth value of each stripe of the stripe image is then computed by triangulation, and the depth information of the measured object is obtained from these depth values.
As an example, the types of structured light include grating, light spot, and speckle (including circular speckle and cross speckle), all uniformly arranged as shown in Fig. 3. Accordingly, the device generating the structured light may be a projection device or instrument that projects light spots, lines, gratings, grids, or speckles onto the measured object, such as an optical projector, or a laser that generates a laser beam.
In this embodiment, the structured-light image of the real scene can be processed to obtain the depth information of the real scene, thereby obtaining the depth information of the target scene to be built in the game.
For example, to build a house scene in a game, a structured-light image of a real house scene may first be acquired; after structured-light image processing, the depth information of the real house scene is obtained.
The camera in the embodiment of the present invention may be the rear camera of a terminal such as a mobile phone or handheld computer. Thus, when the user points the terminal at a real scene, the optical projector and camera in the terminal can be invoked to acquire the structured-light image of the real scene, and processing that structured-light image yields the depth information of the target scene.
Step 102: build a 3D model of the target scene according to the depth information, and implant the 3D model into the game.
In this embodiment, the depth information can be fused with color information of the real scene collected by the camera to obtain the 3D model of the target scene. Specifically, features of the real scene are extracted from the depth information and from the color information separately. The features of the real scene extracted from the depth information and those extracted from the color information are then registered and fused. Finally, the 3D model is generated according to the fused features.
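One common precursor to the depth-plus-color fusion described above is back-projecting each depth pixel into camera space and attaching its color, producing a colored point cloud. The sketch below is illustrative only; the intrinsics (fx, fy, cx, cy) are assumed values, and the patent does not prescribe this particular representation.

```python
def depth_to_colored_points(depth, color, fx=500.0, fy=500.0, cx=None, cy=None):
    """Back-project each pixel (u, v) with depth z into camera-space
    coordinates (x, y, z), paired with that pixel's RGB color."""
    rows, cols = len(depth), len(depth[0])
    cx = cols / 2 if cx is None else cx
    cy = rows / 2 if cy is None else cy
    points = []
    for v in range(rows):
        for u in range(cols):
            z = depth[v][u]
            if z <= 0:  # no depth measured at this pixel
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append(((x, y, z), color[v][u]))
    return points

# Toy 2x2 depth map (mm) and matching RGB image.
depth = [[0.0, 1000.0], [1000.0, 2000.0]]
color = [[(0, 0, 0), (255, 0, 0)], [(0, 255, 0), (0, 0, 255)]]
cloud = depth_to_colored_points(depth, color)
# Pixels with zero depth are skipped, so three colored points remain.
```

A mesh-based 3D model of the target scene could then be reconstructed from such a point cloud.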
After the 3D model of the target scene has been built, it is implanted into the game. As an example, the 3D model of the target scene can be placed directly into the game: the user performs a drag operation on the screen to drag the 3D model of the target scene to a suitable position in the game scene. The terminal monitors the user's drag operation, and when the drag operation stops, identifies the region corresponding to the stop position and places the 3D model of the target scene into that region.
When the 3D model of the target scene is implanted into the game, a scene consistent with or similar to the target scene may already exist in the game. Another embodiment of the game-scene construction method proposed by the embodiment of the present invention is described below.
As shown in Fig. 4, the game-scene construction method includes the following steps:
Step 401: emit structured light toward the real target scene.
In this embodiment, a projection device may be arranged in the terminal to emit structured light toward the real target scene. When the user points the terminal at the real target scene, the projection device arranged in the terminal emits structured light toward the real target scene.
Step 402: collect the light reflected by the structured light from the target scene and form the depth information of the target scene.
In this embodiment, the reflected light can be collected by the camera on the back of the terminal. When the structured light emitted toward the real target scene reaches it, the real target scene obstructs the light, so the structured light is reflected from the surfaces of the real target scene. The camera arranged in the terminal then collects the light reflected from the real target scene; the structured-light image of the target scene is obtained from the collected reflected light, and processing the structured-light image yields the depth information of the target scene.
Step 403: build the 3D model of the target scene according to the depth information.
In this embodiment, the depth information can be fused with color information of the real target scene collected by the camera to obtain the 3D model of the target scene. Specifically, features of the real target scene are extracted from the depth information and from the color information separately. The features extracted from the depth information and those extracted from the color information are then registered and fused. Finally, the 3D model of the target scene is generated according to the fused features.
Step 404: form description information of the 3D model according to the 3D model, and extract a keyword from the description information.
In this embodiment, correspondences between 3D models of multiple objects and description information can be established in advance. After the 3D model of the target scene is built, the 3D models of one or more objects in the target scene are extracted and compared with the preset object 3D models. The description information of the target-scene 3D model is formed according to the comparison result, and the keyword is then extracted from the description information.
For example, the 3D models of two objects are extracted from the 3D model of the real target scene. The 3D model of one object is similar to a preset 3D model whose description information is "desk", and the 3D model of the other object is similar to a preset 3D model whose description information is "bed". It can thus be determined that the target scene is a bedroom, and the description information of the target scene is "a bedroom scene". Finally, the keyword "bedroom" is extracted from the description information "a bedroom scene".
Step 405: judge, according to the keyword, whether a first scene consistent with the target scene exists in the game.
In this embodiment, description information can be established in advance for each scene pre-stored in the game. After the keyword is extracted from the description information of the target scene, it is compared with the description information of each pre-stored scene. If the description information of no pre-stored scene contains the keyword of the target scene, it can be determined that the target scene is inconsistent with the scenes in the game, that is, no first scene consistent with the target scene exists in the game. If the description information of some pre-stored scene contains the keyword of the target scene, it can be determined that a first scene consistent with the target scene exists in the game.
Step 406: if it is judged that the first scene does not exist in the game, implant the 3D model of the target scene into the game.
In this embodiment, if the first scene does not exist in the game, the user can choose the position at which the 3D model of the target scene is implanted into the game by tapping the terminal's screen, and the 3D model of the target scene is implanted at the corresponding position. The terminal monitors the user's position-selection operation in real time; when the terminal detects the user's tap, it determines the position information corresponding to the tap, and then places the 3D model of the target scene at the tapped position according to that position information.
For example, after the 3D model of a bedroom has been built, if no room scene exists in the game scene, the user can tap a position on the screen to implant the bedroom's 3D model at the corresponding position in the game.
Yet another embodiment of the game-scene construction method proposed by the present invention is introduced below.
As shown in Fig. 5, the game-scene construction method includes the following steps:
Step 501: emit structured light toward the real target scene.
Step 502: collect the light reflected by the structured light from the target scene and form the depth information of the target scene.
Step 503: build the 3D model of the target scene according to the depth information.
Step 504: form description information of the 3D model according to the 3D model, and extract a keyword from the description information.
Steps 501-504 are similar to steps 401-404 of the above embodiment and are not repeated here.
Step 505: obtain the keywords of the scenes pre-stored in the game to form a keyword set.
In this embodiment, description information can be established in advance for each scene pre-stored in the game, and the corresponding keyword extracted from the description information, so as to obtain the keyword of each pre-stored scene. The keywords of the scenes then form the keyword set. Of course, each scene can also be described directly with a keyword.
For example, the scenes pre-stored in the game are a street, a room, a park, and a shop; the keywords corresponding to the scenes form the keyword set {street, room, park, shop}.
Step 506: judge whether a keyword consistent with or similar to the keyword of the target scene exists in the keyword set.
In this embodiment, the keyword of the target scene is compared with the keywords in the keyword set to judge whether a keyword consistent with or similar to the keyword of the target scene exists in the set. If it exists, step 507 is performed; if not, step 508 is performed.
For example, the keyword set is {street, room, park, shop}, and the keyword of the target scene is "bedroom". First, "bedroom" is compared with "street" in the keyword set, and the two keywords are judged to be inconsistent. "Bedroom" is then compared with the keyword "room"; the two keywords are similar, so it can be determined that a keyword similar to the keyword of the target scene exists in the keyword set.
To improve the accuracy of the judgment, the similarity between the keyword of the target scene and each keyword in the keyword set can be computed. If the similarity exceeds a preset threshold, the keyword of the target scene can be considered consistent with that keyword in the set.
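The threshold-based similarity check described above can be sketched with a generic string-similarity measure. The patent does not specify the metric; `difflib`'s ratio and the 0.6 threshold below are stand-in assumptions for illustration.

```python
from difflib import SequenceMatcher

def find_match(target_kw, keyword_set, threshold=0.6):
    """Return a stored keyword that is identical to, or more similar than
    the threshold to, the target keyword; return None if none qualifies."""
    for kw in keyword_set:
        if target_kw == kw:
            return kw  # exact match: keywords are consistent
        if SequenceMatcher(None, target_kw, kw).ratio() > threshold:
            return kw  # similar enough under the assumed threshold
    return None

game_scenes = {"street", "room", "park", "shop"}
# "bedroom" vs "room" share the 4-character block "room",
# giving a ratio of 2*4/(7+4) ≈ 0.73, above the assumed threshold.
match = find_match("bedroom", game_scenes)
```

If `find_match` returns a keyword, the corresponding pre-stored scene would be treated as the first scene to be replaced; if it returns `None`, the 3D model would be implanted at a user-selected position instead.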
Step 507: implant the 3D model of the target scene into the game at the position of the first scene, replacing the first scene.
If a keyword consistent with or similar to the keyword of the target scene exists in the keyword set, it is judged that the first scene exists in the game. The first scene is then located in the game and tapped on the screen. After the terminal detects the tap, it determines the position information of the first scene, places the 3D model of the target scene at the position of the first scene, and implants it into the game in place of the first scene.
For example, a user wants to replace a room in the game scene with the 3D model of his own bedroom. The user taps the room in the game on the screen; after the terminal detects the tap, it implants the bedroom's 3D model into the game at the tapped position and replaces the room in the game, thereby realizing the replacement of the game scene.
With the game construction method of this embodiment, a similar or identical scene in the game can be replaced by a real scene, making the scenes in the game more diverse and meeting users' individual needs.
Step 508: implant the 3D model of the target scene into the game.
If no keyword consistent with or similar to that of the target scene exists in the keyword set, it can be judged that the first scene does not exist in the game. The user can then tap a position on the screen, and the terminal implants the 3D model of the target scene into the game at that position.
It should be noted here that, as an example, the structured light used in the above embodiments may be non-uniform structured light: a speckle pattern or random dot pattern formed by a set of multiple light spots.
Fig. 6 is a schematic diagram of a projection set of non-uniform structured light in the embodiment of the present invention. As shown in Fig. 6, the embodiment of the present invention uses non-uniform structured light, namely a randomly arranged, non-uniform speckle pattern. That is, the non-uniform structured light is a set of multiple light spots arranged in a non-uniform, scattered manner, which together form a speckle pattern. Because the storage space occupied by the speckle pattern is small, running the projection device does not greatly affect the operating efficiency of the terminal, and the terminal's storage space is saved.
In addition, compared with other existing types of structured light, the scattered arrangement of the speckle pattern used in the embodiment of the present invention can reduce energy consumption, save power, and improve the battery endurance of the terminal.
In embodiments of the present invention, the projection device and camera can be arranged in a terminal such as a computer, mobile phone, or handheld computer. The projection device emits the non-uniform structured light, i.e., the speckle pattern, toward the real target scene. Specifically, the speckle pattern can be formed with a diffractive optical element in the projection device, on which a certain number of reliefs are provided; the irregular speckle pattern is produced precisely by the irregular reliefs on the diffractive optical element. In the embodiment of the present invention, the depth and number of the relief grooves can be set by an algorithm.
The projection device can be used to project a preset speckle pattern into the space where the measured object is located. The camera can be used to capture the measured object onto which the speckle pattern has been projected, to obtain the structured-light image of the measured object carrying the speckle pattern.
In the embodiment of the present invention, when the camera of the terminal is aimed at the real target scene, the projection device in the terminal projects the preset speckle pattern, which contains multiple speckle points, into the space of the real target scene. When the speckle pattern is projected onto the surfaces of the target scene, many of the speckle points in the pattern shift because of the objects in the target scene. The camera of the terminal collects the reflected light, obtaining the structured-light image carrying the speckle pattern.
Further, image data computation is performed on the captured speckle image of the real target scene and a reference speckle image according to a predetermined algorithm, obtaining the displacement of each speckle point of the target scene's speckle image relative to the corresponding reference speckle point. Finally, based on this displacement, the distance of the reference speckle image from the camera in the terminal, and the relative spacing between the projection device and the camera, the depth value of each speckle point of the speckle image is obtained by triangulation. This yields the depth information of the target scene, from which the 3D model of the target scene is then obtained.
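The triangulation step described above can be sketched as follows. This is a hypothetical, simplified illustration only: the function name, parameter values, and the exact form of the reference-plane disparity relation are assumptions (sign conventions vary with the projector/camera geometry), not the patent's actual algorithm.

```python
# Hypothetical sketch of recovering depth from the measured speckle displacement.
# Assumes a reference plane at known depth, a pinhole camera of known focal
# length (pixels), and a known projector-camera baseline (mm). A surface at
# depth Z shifts a speckle point relative to the reference plane roughly as
#     shift = f * b * (1/Z0 - 1/Z)
# so the depth is recovered by inverting that relation.
def speckle_depth(shift, focal_length, baseline, reference_depth):
    """Depth (mm) of one speckle point from its displacement (pixels)."""
    inv_z = 1.0 / reference_depth - shift / (focal_length * baseline)
    return 1.0 / inv_z

# A point with zero displacement lies on the reference plane itself.
print(speckle_depth(0.0, 580.0, 75.0, 1000.0))  # 1000.0
# A positive displacement (under this sign convention) means a deeper point.
print(speckle_depth(10.0, 580.0, 75.0, 1000.0))
```

Repeating this per speckle point yields the depth map from which the 3D model is built.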
With the game scene construction method of the embodiments of the present invention, the depth information of a target scene to be built in a game is obtained, where the depth information is generated by performing structured light image processing on a real scene; a 3D model of the target scene is built according to the depth information, and the 3D model is implanted into the game. In this embodiment, the 3D model built from the depth information of the real scene is closer to reality than an object model made in game-making software, so implanting it into the game improves the sense of immersion of the game scene and solves the problem of poor immersion in existing game scenes built by software.
The embodiments of the present invention also provide a game scene construction apparatus.
As shown in Fig. 7, the game scene construction apparatus includes an acquisition module 710 and an implantation module 720.
The acquisition module 710 is used to obtain the depth information of the target scene to be built in the game, where the depth information is generated by performing structured light image processing on a real scene.
The implantation module 720 is used to build a 3D model of the target scene according to the depth information and to implant the 3D model into the game.
In a possible implementation of this embodiment, as shown in Fig. 8, the implantation module 720 includes an extraction unit 721, a judging unit 722, and an implantation unit 723.
The extraction unit 721 is used to form description information of the 3D model according to the 3D model and to extract keywords from the description information.
The judging unit 722 is used to judge, according to the keywords, whether a first scene consistent with the target scene exists in the game.
The implantation unit 723 is used to implant the 3D model of the target scene into the game when it is judged that the game does not contain the first scene.
In a possible implementation of this embodiment, the judging unit 722 is further used to:
obtain the keywords of each scene pre-stored in the game to form a keyword set;
search the keyword set for a keyword consistent with or similar to the keywords of the target scene;
if no keyword consistent with or similar to the keywords of the target scene exists in the keyword set, judge that the first scene does not exist in the game.
In a possible implementation of this embodiment, the implantation unit 723 is further used to:
if a keyword consistent with or similar to the keywords of the target scene exists in the keyword set, judge that the first scene exists in the game;
implant the 3D model into the game at the position of the first scene, replacing the first scene.
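The keyword-based decision described above can be illustrated with a minimal sketch. The similarity test used here (shared-token overlap) and all names are assumptions for illustration; the patent does not specify a particular matching algorithm.

```python
# Illustrative sketch: compare the keywords extracted from the 3D model's
# description against the keyword set of each scene pre-stored in the game.
# A non-empty overlap is treated as "consistent or similar" (an assumption).
def find_matching_scene(model_keywords, game_scenes):
    """Return the name of a game scene whose keywords overlap the model's,
    or None when no consistent/similar scene exists (implant as new)."""
    for scene_name, scene_keywords in game_scenes.items():
        if set(model_keywords) & set(scene_keywords):
            return scene_name
    return None

game_scenes = {
    "forest_level": {"forest", "tree", "river"},
    "city_level": {"street", "building", "car"},
}
print(find_matching_scene({"bedroom", "desk"}, game_scenes))  # None
print(find_matching_scene({"street", "lamp"}, game_scenes))   # city_level
```

When the result is `None`, the model is implanted as a new scene; otherwise it replaces the matched scene at that scene's position.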
In a possible implementation of this embodiment, the implantation module 720 is further used to:
monitor a position selection operation of the user;
determine, according to the monitored position selection operation, the position information of the 3D model in the game;
implant the 3D model into the game according to the position information.
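The user-driven placement path above can be sketched in a few lines. The event format and the scene container are entirely hypothetical; the patent leaves the representation of position information unspecified.

```python
# Minimal sketch of the monitored-selection path: a position selection event
# (format assumed) is mapped to in-game coordinates and the model is recorded
# at that position in a simple scene dictionary (also an assumption).
def implant_at_selected_position(scene, model_id, selection_event):
    """Place a 3D model in the scene at the user-selected position."""
    position = (selection_event["x"], selection_event["y"], selection_event["z"])
    scene[model_id] = position
    return position

scene = {}
pos = implant_at_selected_position(scene, "sofa_3d", {"x": 4.0, "y": 0.0, "z": -2.5})
print(scene)  # {'sofa_3d': (4.0, 0.0, -2.5)}
```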
In a possible implementation of this embodiment, the game scene construction apparatus further includes:
an emission module, used to emit structured light toward the real target scene;
a forming module, used to collect the light reflected by the structured light from the target scene and to form the depth information of the target scene.
In a possible implementation of this embodiment, the structured light is non-uniform structured light: a speckle pattern or random dot pattern formed by a set of multiple light spots and produced by a diffractive optical element arranged in the projection device of the terminal, where a certain number of reliefs with differing groove depths are provided on the diffractive optical element.
The division of the modules in the above game scene construction apparatus is only an example; in other embodiments, the apparatus may be divided into different modules as required to complete all or part of its functions.
It should be noted that the foregoing explanation of the game scene construction method embodiments also applies to the game scene construction apparatus of this embodiment and will not be repeated here.
With the game scene construction apparatus of the embodiments of the present invention, the depth information of a target scene to be built in a game is obtained, where the depth information is generated by performing structured light image processing on a real scene; a 3D model of the target scene is built according to the depth information, and the 3D model is implanted into the game. In this embodiment, the 3D model built from the depth information of the real scene is closer to reality than an object model made in game-making software, so implanting it into the game improves the sense of immersion of the game scene and solves the problem of poor immersion in existing game scenes built by software.
The embodiments of the present invention also provide a terminal device. The terminal device includes an image processing circuit, which can be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 9 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 9, for ease of illustration, only the aspects of the image processing technique related to the embodiments of the present invention are shown.
As shown in Fig. 9, the image processing circuit 900 includes an imaging device 910, an ISP processor 930, and a control logic device 940. The imaging device 910 may include a camera with one or more lenses 912 and an image sensor 914, and a structured light projector 916. The structured light projector 916 projects structured light onto the measured object; the structured light pattern may be laser stripes, a Gray code, sinusoidal fringes, a randomly arranged speckle pattern, or the like. The image sensor 914 captures the structured light image formed by the projection onto the measured object and sends it to the ISP processor 930, which demodulates the structured light image to obtain the depth information of the measured object. Meanwhile, the image sensor 914 can also capture the color information of the measured object. Of course, two image sensors 914 may also be used to capture the structured light image and the color information of the measured object separately.
Taking speckle structured light as an example, the ISP processor 930 demodulates the structured light image specifically as follows: the speckle image of the measured object is collected from the structured light image; image data computation is performed on the speckle image of the measured object and a reference speckle image according to a predetermined algorithm, obtaining the displacement of each speckle point of the speckle image on the measured object relative to the corresponding reference speckle point in the reference speckle image; the depth value of each speckle point of the speckle image is calculated by triangulation, and the depth information of the measured object is obtained from these depth values.
Of course, the depth image information may also be obtained by binocular vision or by a time-of-flight (TOF) method based on flight delay; there is no limitation here, and any method that can obtain or calculate the depth information of the measured object falls within the scope of this embodiment.
After the ISP processor 930 receives the color information of the measured object captured by the image sensor 914, it can process the image data corresponding to that color information. The ISP processor 930 analyzes the image data to obtain image statistics that can be used to determine one or more control parameters of the imaging device 910. The image sensor 914 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data to be processed by the ISP processor 930.
The ISP processor 930 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 930 can perform one or more image processing operations on the raw image data and collect statistics about the image data. The image processing operations may be performed with the same or different bit-depth precisions.
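As a small illustration of handling raw pixels at the different bit depths mentioned above (8, 10, 12, or 14 bits), the sketch below scales each raw value to a common 16-bit working precision. This is purely an assumption about what such normalisation might look like, not the actual ISP implementation.

```python
# Hypothetical sketch: normalise a raw pixel of any supported bit depth to the
# 16-bit range, so later processing stages can run at one common precision.
def to_16bit(raw_value, bit_depth):
    """Scale a raw pixel of the given bit depth to the 16-bit range [0, 65535]."""
    max_in = (1 << bit_depth) - 1  # full-scale value at this bit depth
    return round(raw_value * 65535 / max_in)

print(to_16bit(255, 8))   # 65535: full-scale 8-bit maps to full-scale 16-bit
print(to_16bit(512, 10))  # roughly half scale
```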
The ISP processor 930 can also receive pixel data from the image memory 920. The image memory 920 can be part of a memory device, a storage device, or an independent dedicated memory in an electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving the raw image data, the ISP processor 930 can perform one or more image processing operations.
After the ISP processor 930 obtains the color information and the depth information of the measured object, it can fuse them to obtain a three-dimensional image. The features of the corresponding measured object can be extracted by at least one of an appearance contour extraction method and a contour feature extraction method, for example by active shape models (ASM), active appearance models (AAM), principal component analysis (PCA), or the discrete cosine transform (DCT); there is no limitation here. The features of the measured object extracted from the depth information and those extracted from the color information are then registered and fused. The fusion referred to here may be a direct combination of the features extracted from the depth information and the color information, or a combination of the same features in different images after weights are set; other fusion modes are also possible. Finally, the three-dimensional image is generated according to the fused features.
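The weighted combination just described can be sketched as follows. The per-source weight, the flat feature-vector representation, and the assumption that registration has already aligned the two vectors are all illustrative choices, not details given by the patent.

```python
# Hedged sketch of weighted feature fusion: features extracted from the depth
# information and from the color information (assumed already registered to
# equal length) are combined with a per-source weight before the 3-D image is
# generated from the fused features.
def fuse_features(depth_features, color_features, depth_weight=0.5):
    """Weighted elementwise combination of two registered feature vectors."""
    if len(depth_features) != len(color_features):
        raise ValueError("features must be registered to the same length")
    w = depth_weight
    return [w * d + (1.0 - w) * c for d, c in zip(depth_features, color_features)]

print(fuse_features([1.0, 2.0], [3.0, 6.0], depth_weight=0.25))  # [2.5, 5.0]
```

Setting `depth_weight=1.0` or `0.0` reduces this to using one source alone, while `0.5` corresponds to the direct equal combination the text mentions.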
The image data of the three-dimensional image can be sent to the image memory 920 for further processing before being displayed. The ISP processor 930 receives the processed data from the image memory 920 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data of the three-dimensional image may be output to a display 960 to be viewed by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 930 can also be sent to the image memory 920, and the display 960 can read the image data from the image memory 920. In one embodiment, the image memory 920 can be configured to implement one or more frame buffers. Moreover, the output of the ISP processor 930 can be sent to an encoder/decoder 950 to encode/decode the image data; the encoded image data can be saved and decompressed before being displayed on the display 960. The encoder/decoder 950 can be implemented by a CPU, a GPU, or a coprocessor.
The image statistics determined by the ISP processor 930 can be sent to the control logic device 940. The control logic device 940 may include a processor and/or microcontroller executing one or more routines (such as firmware), which can determine the control parameters of the imaging device 910 according to the received image statistics.
The following are the steps of implementing the game scene construction method with the image processing technique of Fig. 9:
obtain the depth information of the target scene to be built in the game, where the depth information is generated by performing structured light image processing on a real scene;
build a 3D model of the target scene according to the depth information, and implant the 3D model into the game.
The embodiments of the present invention also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processor(s) to perform the following steps:
obtain the depth information of the target scene to be built in the game, where the depth information is generated by performing structured light image processing on a real scene;
build a 3D model of the target scene according to the depth information, and implant the 3D model into the game.
Those of ordinary skill in the art will appreciate that all or part of the flow of the above embodiment methods can be completed by instructing the relevant hardware through a computer program; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the flow of each of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or the like.
The above embodiments express only several implementations of the present invention, and their descriptions are specific and detailed, but they should not therefore be interpreted as limiting the scope of the claims of the present invention. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these belong to the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, such as two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, can be understood as representing a module, fragment, or portion of code including one or more executable instructions for implementing the steps of a custom logic function or process. The scope of the preferred embodiments of the present invention includes other implementations, in which functions may be performed out of the order shown or discussed, including substantially simultaneously or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing logic functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection portion (an electronic device) with one or more wirings, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the parts of the present invention can be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following techniques well known in the art, or a combination thereof, can be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art will appreciate that all or part of the steps of the above embodiment methods can be completed by instructing the relevant hardware through a program; the program can be stored in a computer-readable storage medium and, when executed, includes one of, or a combination of, the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention can be integrated into one processing module, or each unit can physically exist separately, or two or more units can be integrated into one module. The integrated module can be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be interpreted as limiting the present invention; those of ordinary skill in the art can change, modify, replace, and vary the above embodiments within the scope of the present invention.
Claims (10)
- 1. A game scene construction method, characterized by comprising: obtaining the depth information of a target scene to be built in a game, wherein the depth information is generated by performing structured light image processing on a real scene; building a 3D model of the target scene according to the depth information, and implanting the 3D model into the game.
- 2. The method according to claim 1, characterized in that implanting the 3D model into the game comprises: forming description information of the 3D model according to the 3D model, and extracting keywords from the description information; judging, according to the keywords, whether a first scene consistent with the target scene exists in the game; if it is judged that the first scene does not exist in the game, implanting the 3D model of the target scene into the game.
- 3. The method according to claim 2, characterized in that judging according to the keywords whether a first scene consistent with the target scene exists in the game comprises: obtaining the keywords of each scene pre-stored in the game to form a keyword set; searching the keyword set for a keyword consistent with or similar to the keywords of the target scene; if no keyword consistent with or similar to the keywords of the target scene exists in the keyword set, judging that the first scene does not exist in the game.
- 4. The method according to claim 3, characterized by further comprising: if a keyword consistent with or similar to the keywords of the target scene exists in the keyword set, judging that the first scene exists in the game; implanting the 3D model into the game at the position of the first scene, replacing the first scene.
- 5. The method according to claim 1, characterized in that implanting the 3D model into the game comprises: monitoring a position selection operation of a user; determining, according to the monitored position selection operation, the position information of the 3D model in the game; implanting the 3D model into the game according to the position information.
- 6. The method according to any one of claims 1-4, characterized in that, before obtaining the depth information of the target scene to be built in the game, the method comprises: emitting structured light toward the real target scene; collecting the light reflected by the structured light from the target scene and forming the depth information of the target scene.
- 7. The method according to claim 6, characterized in that the structured light is non-uniform structured light, the non-uniform structured light being a speckle pattern or random dot pattern formed by a set of multiple light spots and produced by a diffractive optical element arranged in a projection device of a terminal, wherein a certain number of reliefs are provided on the diffractive optical element, the reliefs having differing groove depths.
- 8. A game scene construction apparatus, characterized by comprising: an acquisition module, used to obtain the depth information of a target scene to be built in a game, wherein the depth information is generated by performing structured light image processing on a real scene; an implantation module, used to build a 3D model of the target scene according to the depth information and to implant the 3D model into the game.
- 9. A terminal device, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the game scene construction method according to any one of claims 1-7.
- 10. One or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processor(s) to perform the game scene construction method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710677570.8A CN107643890B (en) | 2017-08-09 | 2017-08-09 | Game scene construction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107643890A true CN107643890A (en) | 2018-01-30 |
CN107643890B CN107643890B (en) | 2021-03-05 |
Family
ID=61110642
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710677570.8A Active CN107643890B (en) | 2017-08-09 | 2017-08-09 | Game scene construction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107643890B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109646950A (en) * | 2018-11-20 | 2019-04-19 | 苏州紫焰网络科技有限公司 | One kind being applied to image processing method, device and terminal in scene of game |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012096747A1 (en) * | 2011-01-11 | 2012-07-19 | Eastman Kodak Company | Forming range maps using periodic illumination patterns |
CN103533449A (en) * | 2012-12-20 | 2014-01-22 | Tcl集团股份有限公司 | Method and system for realizing three-dimensional fitting based on intelligent three-dimensional television |
US20140161173A1 (en) * | 2012-12-11 | 2014-06-12 | Nvidia Corporation | System and method for controlling video encoding using content information |
CN104299143A (en) * | 2014-10-20 | 2015-01-21 | 上海电机学院 | Virtual try-in method and device |
CN104793784A (en) * | 2015-03-23 | 2015-07-22 | 中国科学技术大学先进技术研究院 | Simulation touch operation system and operation method based on depth data |
CN106323190A (en) * | 2016-09-26 | 2017-01-11 | 深圳奥比中光科技有限公司 | Depth measurement range-customizable depth measurement method and system for obtaining depth image |
CN106355637A (en) * | 2016-08-30 | 2017-01-25 | 北京像素软件科技股份有限公司 | Game scene environment rendering method |
CN106371607A (en) * | 2016-09-19 | 2017-02-01 | 深圳奥比中光科技有限公司 | Man-machine interaction method and system based on cooperative game |
CN106540447A (en) * | 2016-11-04 | 2017-03-29 | 宇龙计算机通信科技(深圳)有限公司 | VR scenario building method and system, VR game building methods and system, VR equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | |
| GR01 | Patent grant | |

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong
Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.
Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong
Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.