CN107657632A - Scene display method and device, and terminal device - Google Patents


Info

Publication number
CN107657632A
CN107657632A
Authority
CN
China
Prior art keywords
scene
attribute
models
point
display
Prior art date
Legal status
Pending
Application number
CN201710677492.1A
Other languages
Chinese (zh)
Inventor
周意保
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710677492.1A
Publication of CN107657632A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Abstract

The present invention proposes a scene display method and device, and a terminal device. The method includes: projecting onto a scene to be displayed using a structured-light device, and obtaining a 3D model of each object in the scene to be displayed at each of a plurality of time points; for each object in the scene, comparing the 3D models of the object at the respective time points to determine an attribute of the object, the attribute being either rigid or non-rigid; determining a display type of the object in the scene according to the attribute of the object; and displaying the scene according to the 3D model and the display type of each object. Non-rigid objects in the scene are thereby de-emphasized and rigid objects emphasized, so that the user can easily see the rigid objects in the displayed scene. This improves the display effect of rigid objects, raises shooting efficiency, and improves the user's shooting experience.

Description

Scene display method and device, and terminal device
Technical field
The present invention relates to the field of terminal devices, and more particularly to a scene display method and device, and a terminal device.
Background art
At present, with the development of mobile communication technology, the mobile terminal has become an indispensable communication device in people's daily life, and advances in mobile-terminal photography mean that mobile terminals are frequently used to capture pictures of scenes, for example buildings in temples, museums, parks, and the like. Because tourist attractions are usually crowded, when a user captures a scene with a mobile terminal the picture often contains many people or moving vehicles, making it difficult to obtain a high-quality scene picture; this reduces shooting efficiency and degrades the user's shooting experience.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present invention is to propose a scene display method, so that a terminal device can effectively capture pictures of a scene, solving the prior-art problems that high-quality scene pictures are difficult to obtain and shooting efficiency is poor.
A second objective of the present invention is to propose a scene display device.
A third objective of the present invention is to propose a terminal device.
A fourth objective of the present invention is to propose a non-transitory computer-readable storage medium.
To achieve the above objectives, an embodiment of the first aspect of the present invention proposes a scene display method, including:
projecting onto a scene to be displayed using a structured-light device, and obtaining a 3D model of each object in the scene to be displayed at each of a plurality of time points;
for each object in the scene to be displayed, comparing the 3D models of the object at the respective time points to determine an attribute of the object, the attribute being either rigid or non-rigid;
determining a display type of the object in the scene to be displayed according to the attribute of the object; and
displaying the scene to be displayed according to the 3D model and the display type of each object in the scene.
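The four steps above can be sketched end to end. The following Python outline is illustrative only: it assumes each object's per-time-point 3D model is reduced to a list of 3D feature points, and the deformation measure and threshold value are chosen for the sketch rather than taken from the patent.

```python
def deformation_extent(models):
    """Mean displacement of corresponding 3D feature points between
    consecutive time points; one illustrative measure of deformation."""
    total, count = 0.0, 0
    for prev, curr in zip(models, models[1:]):
        for p, q in zip(prev, curr):
            total += sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            count += 1
    return total / count if count else 0.0

def plan_display(scene, threshold=0.05):
    """scene maps object name -> list of 3D models (feature-point lists),
    one per time point. Rigid objects get enhanced display; non-rigid
    objects get reduced display."""
    plan = {}
    for name, models in scene.items():
        rigid = deformation_extent(models) < threshold
        plan[name] = "enhanced" if rigid else "reduced"
    return plan
```

For instance, a static building yields a deformation extent near zero and is planned for enhanced display, while a moving person crosses the threshold and is planned for reduced display.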
As a possible implementation of the first-aspect embodiment, projecting onto the scene to be displayed using the structured-light device and obtaining the 3D model of each object in the scene at each time point includes:
for each time point, projecting onto the scene to be displayed using the structured-light device;
imaging each object in the scene to be displayed with a camera, and obtaining a depth image of each object in the scene; and
calculating the 3D model of each object by combining the depth image of each object with the positional relationship between the structured-light device and the camera.
As a possible implementation of the first-aspect embodiment, the structured light generated by the structured-light device is non-uniform structured light.
As a possible implementation of the first-aspect embodiment, comparing the 3D models of the object at the respective time points to determine the attribute of the object includes:
for each object in the scene to be displayed, analyzing the 3D model of the object at each time point and extracting feature-point information of the object;
comparing the feature-point information of the object across the time points to determine a deformation extent of the object; and
determining the attribute of the object according to the deformation extent of the object.
As a possible implementation of the first-aspect embodiment, determining the attribute of the object according to its deformation extent includes:
if the deformation extent of the object is below a preset threshold, determining that the attribute of the object is rigid; and
if the deformation extent of the object is greater than or equal to the preset threshold, determining that the attribute of the object is non-rigid.
As a possible implementation of the first-aspect embodiment, determining the display type of the object in the scene to be displayed according to its attribute includes:
if the attribute of the object is rigid, determining that the display type of the object in the scene is enhanced display; and
if the attribute of the object is non-rigid, determining that the display type of the object in the scene is reduced display.
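The threshold rule and the attribute-to-display-type mapping above amount to two small pure functions. This is a minimal sketch; the attribute and display-type names are labels chosen for illustration, not identifiers from the patent:

```python
def classify_attribute(deformation, threshold):
    """Rigid below the preset threshold; a tie counts as non-rigid,
    matching the 'greater than or equal to' rule."""
    return "rigid" if deformation < threshold else "non-rigid"

def display_type(attribute):
    """Rigid objects are displayed with enhancement, non-rigid with reduction."""
    return "enhanced" if attribute == "rigid" else "reduced"
```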
In the scene display method of the embodiment of the present invention, a structured-light device projects onto a scene to be displayed to obtain a 3D model of each object in the scene at each of a plurality of time points; for each object, the 3D models of the object at the respective time points are compared to determine the attribute of the object, rigid or non-rigid; the display type of the object in the scene is determined from its attribute; and the scene is displayed according to the 3D model and the display type of each object. Non-rigid objects in the scene are thereby de-emphasized and rigid objects emphasized, so that the user can easily see the rigid objects in the displayed scene. This improves the display effect of rigid objects, raises shooting efficiency, and improves the user's shooting experience.
To achieve the above objectives, an embodiment of the second aspect of the present invention proposes a scene display device, including:
a projection module, configured to project onto a scene to be displayed using a structured-light device and obtain a 3D model of each object in the scene to be displayed at each of a plurality of time points;
a comparison module, configured to, for each object in the scene to be displayed, compare the 3D models of the object at the respective time points and determine an attribute of the object, the attribute being either rigid or non-rigid;
a determination module, configured to determine a display type of the object in the scene to be displayed according to the attribute of the object; and
a display module, configured to display the scene to be displayed according to the 3D model and the display type of each object in the scene.
As a possible implementation of the second-aspect embodiment, the projection module includes:
a projecting unit, configured to, for each time point, project onto the scene to be displayed using the structured-light device;
an imaging unit, configured to image each object in the scene to be displayed with a camera and obtain a depth image of each object in the scene; and
a computing unit, configured to calculate the 3D model of each object by combining the depth image of each object with the positional relationship between the structured-light device and the camera.
As a possible implementation of the second-aspect embodiment, the structured light generated by the structured-light device is non-uniform structured light.
As a possible implementation of the second-aspect embodiment, the comparison module includes:
an extraction unit, configured to, for each object in the scene to be displayed, analyze the 3D model of the object at each time point and extract feature-point information of the object;
a comparison unit, configured to compare the feature-point information of the object across the time points and determine a deformation extent of the object; and
a determination unit, configured to determine the attribute of the object according to the deformation extent of the object.
As a possible implementation of the second-aspect embodiment, the determination unit is specifically configured to:
determine that the attribute of the object is rigid when the deformation extent of the object is below a preset threshold; and
determine that the attribute of the object is non-rigid when the deformation extent of the object is greater than or equal to the preset threshold.
As a possible implementation of the second-aspect embodiment, the determination module includes:
a first determining unit, configured to determine that the display type of the object in the scene to be displayed is enhanced display when the attribute of the object is rigid; and
a second determining unit, configured to determine that the display type of the object in the scene to be displayed is reduced display when the attribute of the object is non-rigid.
In the scene display device of the embodiment of the present invention, a structured-light device projects onto a scene to be displayed to obtain a 3D model of each object in the scene at each of a plurality of time points; for each object, the 3D models of the object at the respective time points are compared to determine the attribute of the object, rigid or non-rigid; the display type of the object in the scene is determined from its attribute; and the scene is displayed according to the 3D model and the display type of each object. Non-rigid objects in the scene are thereby de-emphasized and rigid objects emphasized, so that the user can easily see the rigid objects in the displayed scene. This improves the display effect of rigid objects, raises shooting efficiency, and improves the user's shooting experience.
To achieve the above objectives, an embodiment of the third aspect of the present invention proposes a terminal device, including:
a housing, and a processor and a memory inside the housing, wherein the processor reads executable program code stored in the memory and runs a program corresponding to the executable program code, so as to implement the scene display method described in the first-aspect embodiment.
To achieve the above objectives, an embodiment of the fourth aspect of the present invention proposes a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the scene display method described in the first-aspect embodiment is implemented.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become apparent from the description or be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a scene display method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of structured-light projections of various forms provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another scene display method provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a scene display device provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another scene display device provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of yet another scene display device provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of still another scene display device provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a terminal device provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain the present invention, and are not to be construed as limiting the present invention.
The scene display method and device, and the terminal device of the embodiments of the present invention are described below with reference to the accompanying drawings.
They enable a terminal device to effectively capture pictures of a scene, solving the prior-art problems that high-quality scene pictures are difficult to obtain and shooting efficiency is poor.
Fig. 1 is a schematic flowchart of a scene display method provided by an embodiment of the present invention.
As shown in Fig. 1, the scene display method includes the following steps.
S101: project onto a scene to be displayed using a structured-light device, and obtain a 3D model of each object in the scene to be displayed at each of a plurality of time points.
The execution subject of the scene display method provided by this embodiment is a scene display device, which may specifically be hardware or software installed on a terminal device, or hardware or software on a background server connected to the terminal device. The terminal device may be a smartphone, a tablet computer, an iPad, or the like.
In this embodiment, structured light refers to a set of projection rays with known spatial directions, and a structured-light device refers to equipment that generates structured light and can project the structured light onto a measured object. Structured-light patterns may include a point-structured-light pattern, a line-structured-light pattern, a multi-line-structured-light pattern, an area-structured-light pattern, and a phase-method pattern. In the point-structured-light pattern, the beam emitted by the structured-light device produces a single light point on the measured object; the point is imaged through the camera lens onto the camera's imaging plane, forming a two-dimensional point. In the line-structured-light pattern, the emitted beam produces a light stripe on the measured object; imaged through the camera lens onto the imaging plane, it forms a stripe that may be distorted or broken. In the multi-line pattern, the emitted beam produces multiple light stripes on the measured object. In the area pattern, the emitted beam produces a light surface on the measured object. The degree of distortion of a stripe is proportional to the depth of the corresponding position on the measured object, and breaks in a stripe correlate with features such as gaps on the measured object.
Combining the light point, stripe, or surface on the camera's imaging plane with the positional relationship between the camera and the structured-light device yields a triangular geometric constraint, from which the spatial position of the light point, stripe, or surface in a known world coordinate system can be uniquely determined; that is, the spatial position of each location and each feature point of the measured object in the world coordinate system is determined. Combined with the color information collected by the camera, the three-dimensional reconstruction of the measured object can then be completed. Fig. 2 shows the structured-light projections of various forms produced on a measured object by different structured-light devices.
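The triangular constraint can be illustrated with the simplest configuration: a projector and a camera separated by a known baseline, where a projected point's lateral shift (disparity) on the imaging plane encodes its depth. This is a generic pinhole-camera sketch under assumed parameter names, not the patent's exact geometry:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of one projected light point: z = f * b / d, with f the
    camera focal length in pixels, b the projector-camera baseline in
    meters, and d the observed disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def point_3d(u_px, v_px, cx, cy, focal_px, depth_m):
    """Back-project an image point (u, v) at a known depth into camera
    coordinates with a pinhole model (cx, cy: principal point)."""
    x = (u_px - cx) * depth_m / focal_px
    y = (v_px - cy) * depth_m / focal_px
    return (x, y, depth_m)
```

Applying this to every projected point yields the point cloud from which an object's 3D model can be built; nearer points show larger disparity and thus smaller depth.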
In addition, it should be noted that in this embodiment the structured-light patterns may also include a patterned-light mode, in which the beam emitted by the structured-light device produces a non-uniform array of light points on the measured object.
In this embodiment, specifically, for each time point the scene display device may call the structured-light device to project onto the scene to be displayed; image each object in the scene with a camera to obtain a depth image of each object; and calculate the 3D model of each object by combining the depth images with the positional relationship between the structured-light device and the camera.
In this embodiment, the scene display device may obtain the extent of the scene to be displayed in advance, and adjust the projection angle and projection range of the structured-light device according to that extent, so that the projection range covers the scene to be displayed.
In this embodiment, the scene display device may call the structured-light device to project structured light onto the scene to be displayed, and call the camera to photograph the structured-light projection within the projection range, thereby obtaining the depth image of each object in the scene and, in turn, the 3D model of each object.
In this embodiment, the scene display device may obtain the depth image of each object in the scene as follows: call the structured-light device to project onto the scene and obtain a depth image of the whole scene; analyze that depth image to obtain the feature-point information in the scene; analyze the feature-point information to determine the region occupied by each object in the scene; and thereby determine the depth image of each object.
In this embodiment, the time points, and the time interval between them, can be set as needed, for example according to the deformation speed and moving speed of the objects in the scene to be displayed.
S102: for each object in the scene to be displayed, compare the 3D models of the object at the respective time points and determine the attribute of the object, the attribute being either rigid or non-rigid.
In this embodiment, for each object in the scene, the scene display device may compare the 3D models of the object at the respective time points to determine the deformation extent of the object, and determine from the deformation extent whether the object is rigid or non-rigid. For a movable object in the scene, the object may be present in the scene at one time point but absent at another; in that case the attribute of the movable object can be determined to be non-rigid, the movable object being treated as an object whose shape, size, or structure changes under external force.
S103: determine the display type of the object in the scene to be displayed according to the attribute of the object.
In this embodiment, if the attribute of the object is rigid, the display type of the object in the scene is determined to be enhanced display; if the attribute is non-rigid, the display type is determined to be reduced display. Enhanced display may mean highlighted display, bold display, darkened display, and the like; reduced display may mean dimmed display, lightened display, thinned display, not displaying at all, and the like. The means of enhanced and reduced display can be set as needed and are not limited to the above, provided the user can clearly observe the objects displayed with enhancement.
S104: display the scene to be displayed according to the 3D model and the display type of each object in the scene.
In this embodiment, the scene display device may display each object according to its 3D model and display type, thereby displaying the scene to be displayed.
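As one possible way to realize the display step, each object's pixel intensities can simply be scaled by a gain tied to its display type; the gain values below are illustrative choices, not values given by the patent:

```python
def render_object(pixels, display_type):
    """Brighten enhanced objects and dim reduced ones. Pixel values are
    treated as 8-bit grayscale intensities clamped to [0, 255]."""
    gain = {"enhanced": 1.3, "reduced": 0.3}[display_type]
    return [min(255, int(round(p * gain))) for p in pixels]
```

With such a scheme, a dimming gain near zero approaches the "not displaying at all" end of reduced display, while a gain above one corresponds to highlighted display.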
In the scene display method of the embodiment of the present invention, a structured-light device projects onto a scene to be displayed to obtain a 3D model of each object in the scene at each of a plurality of time points; for each object, the 3D models of the object at the respective time points are compared to determine the attribute of the object, rigid or non-rigid; the display type of the object in the scene is determined from its attribute; and the scene is displayed according to the 3D model and the display type of each object. Non-rigid objects in the scene are thereby de-emphasized and rigid objects emphasized, so that the user can easily see the rigid objects in the displayed scene. This improves the display effect of rigid objects, raises shooting efficiency, and improves the user's shooting experience.
Fig. 3 is a schematic flowchart of another scene display method provided by an embodiment of the present invention. As shown in Fig. 3, on the basis of the embodiment shown in Fig. 1, step S102 may specifically include the following steps.
S1021: for each object in the scene to be displayed, analyze the 3D model of the object at each time point and extract the feature-point information of the object.
In this embodiment, for each object in the scene, the scene display device analyzes the 3D model of the object at each time point and extracts the feature-point information of the object at that time point. Because the object may deform between time points, its feature-point information may differ from one time point to another. The feature-point information mentioned in this embodiment is three-dimensional feature-point information obtained by analyzing the 3D models.
S1022: compare the feature-point information of the object across the time points and determine the deformation extent of the object.
In this embodiment, deformation of an object changes its feature-point information; therefore, by comparing the feature-point information of the object across the time points, the deformation extent of the object can be determined, and from the deformation extent it can be determined whether the attribute of the object is rigid or non-rigid.
S1023: determine the attribute of the object according to its deformation extent.
In this embodiment, specifically, the scene display device may determine that the attribute of the object is rigid when its deformation extent is below a preset threshold, and non-rigid when its deformation extent is greater than or equal to the preset threshold. For example, for the building scene of a tourist attraction, the 3D model of each object in the scene is obtained at each time point; the objects in the scene include buildings, people, trees, and so on. Because people are essentially always moving or acting, comparing a person's feature-point information across the time points shows a high deformation extent, that is, the coordinates of the feature points obtained at each time point keep changing, so the person can be determined to be non-rigid. A building does not change between time points and has a very low deformation extent, so the building can be determined to be rigid. The display of the people is thereby weakened and the display of the buildings enhanced, so that the user easily observes the buildings in the scene and hardly notices the people; this improves the display effect of the buildings and raises shooting efficiency.
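The building-versus-person example above can be made concrete. Here the "feature-point information" of each object is reduced to a list of 3D coordinates per time point, and the deformation extent is taken as the largest feature-point shift between consecutive time points; both are illustrative simplifications, not the patent's definitions:

```python
def max_point_shift(points_by_time):
    """Largest displacement of any feature point between consecutive
    time points; a simple stand-in for the deformation extent."""
    worst = 0.0
    for prev, curr in zip(points_by_time, points_by_time[1:]):
        for p, q in zip(prev, curr):
            dist = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            worst = max(worst, dist)
    return worst

# A static building vs. a walking person, sampled at three time points.
building = [[(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]] * 3
person = [[(0.0, 0.0, 0.0)], [(0.5, 0.0, 0.0)], [(1.2, 0.0, 0.0)]]
```

Against a preset threshold of, say, 0.1, the building's deformation extent (0.0) falls below it and the building is classified as rigid, while the person's extent exceeds it and the person is classified as non-rigid.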
In the scene display method of the embodiment of the present invention, a structured-light device projects onto a scene to be displayed to obtain a 3D model of each object in the scene at each of a plurality of time points; for each object, the 3D models at the respective time points are analyzed to extract the feature-point information of the object; the feature-point information is compared across the time points to determine the deformation extent of the object; the attribute of the object, rigid or non-rigid, is determined from the deformation extent; the display type of the object in the scene is determined from its attribute; and the scene is displayed according to the 3D model and the display type of each object. Non-rigid objects are thereby de-emphasized and rigid objects emphasized, so that the user can easily see the rigid objects in the displayed scene. This improves the display effect of rigid objects, raises shooting efficiency, and improves the user's shooting experience.
Fig. 4 is a schematic structural diagram of a scene display device provided by an embodiment of the present invention. As shown in Fig. 4, the scene display device includes a projection module 41, a comparison module 42, a determination module 43, and a display module 44.
The projection module 41 is configured to project onto a scene to be displayed using a structured-light device and obtain a 3D model of each object in the scene to be displayed at each of a plurality of time points.
The comparison module 42 is configured to, for each object in the scene to be displayed, compare the 3D models of the object at the respective time points and determine the attribute of the object, the attribute being either rigid or non-rigid.
The determination module 43 is configured to determine the display type of the object in the scene to be displayed according to the attribute of the object.
The display module 44 is configured to display the scene to be displayed according to the 3D model and the display type of each object in the scene.
The scene display device provided by this embodiment may specifically be hardware or software installed on a terminal device, or hardware or software on a background server connected to the terminal device. The terminal device may be a smartphone, a tablet computer, an iPad, or the like.
In this embodiment, for each object in the scene, the scene display device may compare the 3D models of the object at the respective time points to determine the deformation extent of the object, and determine from the deformation extent whether the object is rigid or non-rigid. For a movable object in the scene, the object may be present in the scene at one time point but absent at another; in that case the attribute of the movable object can be determined to be non-rigid, the movable object being treated as an object whose shape, size, or structure changes under external force.
In this embodiment, if the attribute of the object is rigid, the display type of the object in the scene is determined to be enhanced display; if the attribute is non-rigid, the display type is determined to be reduced display. Enhanced display may mean highlighted display, bold display, darkened display, and the like; reduced display may mean dimmed display, lightened display, thinned display, not displaying at all, and the like. The means of enhanced and reduced display can be set as needed and are not limited to the above, provided the user can clearly observe the objects displayed with enhancement.
Based on Fig. 4, Fig. 5 is a schematic structural diagram of another scene display device provided by an embodiment of the present invention. As shown in Fig. 5, the projection module 41 includes a projection unit 411, an imaging unit 412, and a computing unit 413.
The projection unit 411 is configured to project onto the scene to be shown using a structured light device for each time point;
The imaging unit 412 is configured to image each object in the scene to be shown using a camera to obtain a depth image of each object in the scene to be shown;
The computing unit 413 is configured to compute the 3D model of each object by combining the depth image of each object with the positional relationship between the structured light device and the camera.
In this embodiment, the scene display device may obtain the range of the scene to be shown in advance and adjust the projection angle and projection range of the structured light device according to that range, so that the projection range covers the scene to be shown.
In this embodiment, the scene display device may invoke the structured light device to project structured light onto the scene to be shown, and invoke the camera to capture the structured light projection within the projection range, so as to obtain the depth image of each object in the scene to be shown and, from it, the 3D model of each object.
In this embodiment, the depth images of the objects may be obtained as follows: the scene display device invokes the structured light device to project onto the scene to be shown and obtains a depth image of the whole scene; it analyzes that depth image to obtain the feature point information of the scene; it analyzes the feature point information to determine the region occupied by each object in the scene; and it thereby determines the depth image of each object.
In this embodiment, the time difference between the time points may be set as needed, for example according to the deformation speed or moving speed of the objects in the scene to be shown.
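Under the pinhole camera model, the computation the computing unit 413 performs can be sketched as follows. This is a minimal illustration with assumed intrinsic parameters (fx, fy, cx, cy); the actual device also uses the projector-camera positional relationship, which is omitted here:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3D point cloud using
    the pinhole camera model. Returns an (H, W, 3) array of (x, y, z)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```

Fitting such points to a surface, or collecting those belonging to one object's region, would then give that object's 3D model.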
With the scene display device of this embodiment of the present invention, a structured light device projects onto the scene to be shown so as to obtain the 3D models of the objects in the scene at each time point; for each object, the 3D models at the respective time points are compared to determine the attribute of the object, the attribute being rigid or non-rigid; the display type of the object in the scene is determined from its attribute; and the scene is shown according to the 3D models and display types of its objects. The non-rigid bodies in the scene to be shown are thus weakened and the rigid bodies enhanced, so that the user can easily see the rigid bodies in the shown scene, which improves the display effect of the rigid bodies, the shooting efficiency, and the user's shooting experience.
Based on Fig. 4, Fig. 6 is a schematic structural diagram of another scene display device provided by an embodiment of the present invention. As shown in Fig. 6, the comparing module 42 includes an extraction unit 421, a comparing unit 422, and a determining unit 423.
The extraction unit 421 is configured, for each object in the scene to be shown, to analyze the 3D models of the object at the respective time points and extract the feature point information of the object;
The comparing unit 422 is configured to compare the feature point information of the object at the respective time points to determine the deformation degree of the object;
The determining unit 423 is configured to determine the attribute of the object according to the deformation degree of the object.
Specifically, the determining unit 423 determines that the attribute of the object is rigid when the deformation degree of the object is less than a preset threshold, and non-rigid when the deformation degree of the object is greater than or equal to the preset threshold.
In this embodiment, for each object in the scene to be shown, the scene display device analyzes the 3D models of the object at the respective time points and extracts the feature point information of the object at each time point. Because the object may deform between time points, its feature point information may differ from one time point to another. The feature point information referred to in this embodiment is three-dimensional feature point information obtained by analyzing the 3D models.
In this embodiment, the deformation of an object changes its feature point information; by comparing the feature point information of the object at the respective time points, the deformation degree of the object can be determined, and from that deformation degree the attribute of the object can be determined to be rigid or non-rigid.
Specifically, the scene display device may determine that the attribute of an object is rigid when its deformation degree is less than a preset threshold, and non-rigid when its deformation degree is greater than or equal to the preset threshold. Take the building scene of a tourist attraction as an example: the 3D models of the objects in the scene are obtained at each time point, the objects including buildings, people, trees, and so on. Because people are generally moving or gesturing, comparing the feature point information of a person across time points shows a high deformation degree, i.e. the coordinates of the feature points obtained at each time point keep changing, so the person can be classified as non-rigid. A building, by contrast, does not change between time points and its deformation degree is very low, so it can be classified as rigid. The display of the person is therefore weakened and the display of the building enhanced, so that the user easily observes the building in the scene rather than the people, which improves the display effect of the building and the shooting efficiency.
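One plausible realization of this threshold rule, assuming feature points have already been matched between the two time points, can be sketched as follows (the deformation measure and the 0.05-meter threshold are illustrative assumptions, not values from this disclosure):

```python
import math

def deformation_degree(points_t1, points_t2):
    """Mean displacement between matched 3D feature points of one object
    at two time points; one plausible measure of deformation degree."""
    assert len(points_t1) == len(points_t2) and points_t1
    total = sum(math.dist(a, b) for a, b in zip(points_t1, points_t2))
    return total / len(points_t1)

def classify_attribute(points_t1, points_t2, threshold=0.05):
    """Rigid if the deformation degree is below the preset threshold,
    non-rigid if it is greater than or equal to the threshold."""
    return "rigid" if deformation_degree(points_t1, points_t2) < threshold else "non-rigid"
```

In the tourist-attraction example, a building's feature points barely move between time points (degree near zero, hence rigid), while a person's keep shifting (degree above the threshold, hence non-rigid).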
With the scene display device of this embodiment of the present invention, a structured light device projects onto the scene to be shown so as to obtain the 3D models of the objects in the scene at each time point; for each object, the 3D models at the respective time points are analyzed to extract the feature point information of the object; the feature point information at the respective time points is compared to determine the deformation degree of the object; the attribute of the object, rigid or non-rigid, is determined from its deformation degree; the display type of the object in the scene is determined from its attribute; and the scene is shown according to the 3D models and display types of its objects. The non-rigid bodies in the scene to be shown are thus weakened and the rigid bodies enhanced, so that the user can easily see the rigid bodies in the shown scene, which improves the display effect of the rigid bodies, the shooting efficiency, and the user's shooting experience.
Based on Fig. 4, Fig. 7 is a schematic structural diagram of another scene display device provided by an embodiment of the present invention. As shown in Fig. 7, the determining module 43 includes a first determining unit 431 and a second determining unit 432.
The first determining unit 431 is configured to determine, when the attribute of the object is rigid, that the display type of the object in the scene to be shown is enhanced display;
The second determining unit 432 is configured to determine, when the attribute of the object is non-rigid, that the display type of the object in the scene to be shown is weakened display.
An embodiment of the present invention also provides a terminal device. The terminal device includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline. Fig. 8 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 8, for ease of illustration, only the aspects of the image processing technique related to this embodiment of the present invention are shown.
As shown in Fig. 8, the image processing circuit 900 includes an imaging device 910, an ISP processor 930, and a control logic 940. The imaging device 910 may include a camera with one or more lenses 912 and an image sensor 914, and a structured light projector 916. The structured light projector 916 projects structured light onto the object to be measured; the structured light pattern may be laser stripes, Gray codes, sinusoidal fringes, a randomly arranged speckle pattern, or the like. The image sensor 914 captures the structured light image formed on the measured object and sends it to the ISP processor 930, which demodulates the structured light image to obtain the depth information of the measured object. Meanwhile, the image sensor 914 may also capture the color information of the measured object. Of course, the structured light image and the color information of the measured object may instead be captured separately by two image sensors 914.
Taking speckle structured light as an example, the ISP processor 930 demodulates the structured light image as follows: it extracts the speckle image of the measured object from the structured light image, performs image data computation between the speckle image of the measured object and a reference speckle image according to a predetermined algorithm, and obtains, for each speckle point of the speckle image on the measured object, its displacement relative to the corresponding reference speckle point in the reference speckle image. The depth value of each speckle point is then calculated by triangulation, and the depth information of the measured object is obtained from these depth values.
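The triangulation step can be illustrated with one common form of the projector-camera relation, in which depth is inversely proportional to the measured speckle displacement. The focal length and baseline values below are assumptions for illustration, not parameters from this disclosure:

```python
def depth_from_displacement(displacement_px, focal_px, baseline_m):
    """Triangulation: convert the displacement (in pixels) of a speckle point
    relative to its reference position into a depth value (in meters), using
    depth = focal length * baseline / displacement."""
    if displacement_px <= 0:
        raise ValueError("displacement must be positive")
    return focal_px * baseline_m / displacement_px
```

For example, with an assumed 500 px focal length and an 8 cm projector-camera baseline, a 100 px displacement corresponds to a depth of 0.4 m; smaller displacements map to larger depths.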
Of course, the depth information may also be obtained by binocular vision or by a time-of-flight (TOF) method, among others; no limitation is made here, as long as the method can obtain or compute the depth information of the measured object, it falls within the scope of this embodiment.
After the ISP processor 930 receives the color information of the measured object captured by the image sensor 914, it can process the image data corresponding to that color information. The ISP processor 930 analyzes the image data to obtain image statistics that can be used to determine one or more control parameters of the imaging device 910. The image sensor 914 may include a color filter array (such as a Bayer filter); the image sensor 914 can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 930.
The ISP processor 930 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 930 may perform one or more image processing operations on the raw image data and collect image statistics about the image data. The image processing operations may be carried out at the same or different bit-depth precisions.
The ISP processor 930 may also receive pixel data from an image memory 920. The image memory 920 may be part of a storage device, an independent dedicated memory within a storage device or an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving the raw image data, the ISP processor 930 may perform one or more image processing operations.
After the ISP processor 930 obtains the color information and the depth information of the measured object, it can fuse them to obtain a three-dimensional image. The features of the measured object may be extracted by at least one of an appearance contour extraction method or a contour feature extraction method, for example by active shape models (ASM), active appearance models (AAM), principal component analysis (PCA), or the discrete cosine transform (DCT), without limitation here. Registration and feature fusion are then performed between the features of the measured object extracted from the depth information and those extracted from the color information. The fusion referred to here may be a direct combination of the features extracted from the depth information and the color information, or a combination of identical features from the different images after weights have been set; other fusion modes are also possible. Finally, the three-dimensional image is generated from the fused features.
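As one minimal instance of the weighted fusion mode mentioned above, registered feature vectors from the depth and color channels can be blended linearly. The weight value and the feature vectors are illustrative assumptions; direct concatenation is the other fusion mode the text mentions:

```python
import numpy as np

def fuse_features(depth_features, color_features, depth_weight=0.6):
    """Weighted feature fusion: blend registered feature vectors extracted
    from depth information and from color information into one vector."""
    depth_features = np.asarray(depth_features, dtype=float)
    color_features = np.asarray(color_features, dtype=float)
    if depth_features.shape != color_features.shape:
        raise ValueError("features must be registered to the same shape")
    return depth_weight * depth_features + (1.0 - depth_weight) * color_features
```

A weight of 1.0 keeps only the depth features and 0.0 only the color features; intermediate values realize the weighted combination described above.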
The image data of the three-dimensional image may be sent to the image memory 920 for additional processing before being shown. The ISP processor 930 receives the processed data from the image memory 920 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data of the three-dimensional image may be output to a display 960 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 930 may also be sent to the image memory 920, and the display 960 may read the image data from the image memory 920. In one embodiment, the image memory 920 may be configured to implement one or more frame buffers. Moreover, the output of the ISP processor 930 may be sent to an encoder/decoder 950 to encode/decode the image data; the encoded image data may be saved and decompressed before being shown on the display 960. The encoder/decoder 950 may be implemented by a CPU, a GPU, or a coprocessor.
The image statistics determined by the ISP processor 930 may be sent to the control logic 940. The control logic 940 may include a processor and/or microcontroller that executes one or more routines (such as firmware), which may determine the control parameters of the imaging device 910 according to the received image statistics.
The steps of realizing the scene display method with the image processing technique of Fig. 8 are as follows:
Project onto the scene to be shown using a structured light device, and obtain the 3D models of each object in the scene to be shown at each time point;
For each object in the scene to be shown, compare the 3D models of the object at the respective time points to determine the attribute of the object, the attribute of the object being rigid or non-rigid;
Determine the display type of the object in the scene to be shown according to the attribute of the object;
Show the scene to be shown according to the 3D models and display types of each object in the scene to be shown.
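The four steps above can be sketched end to end as follows. Model acquisition and rendering are stubbed out, and the deformation measure and threshold are assumptions consistent with the earlier embodiments; the function returns the display plan rather than drawing anything:

```python
import math

def show_scene(models_over_time, threshold=0.05):
    """Sketch of the four claimed steps. `models_over_time` maps each object
    name to a list of per-time-point feature point lists (each a list of
    (x, y, z) tuples). Returns {name: (attribute, display_type)}."""
    plan = {}
    for name, snapshots in models_over_time.items():
        # Step 2: compare the 3D models at consecutive time points.
        degree = 0.0
        for earlier, later in zip(snapshots, snapshots[1:]):
            shift = sum(math.dist(a, b) for a, b in zip(earlier, later))
            degree = max(degree, shift / len(earlier))
        attribute = "rigid" if degree < threshold else "non-rigid"
        # Step 3: attribute determines display type.
        display = "enhanced" if attribute == "rigid" else "weakened"
        plan[name] = (attribute, display)
    return plan
```

With a static building and a moving person, the plan enhances the building and weakens the person, matching the tourist-attraction example in the embodiments.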
To realize the above embodiments, the present invention further proposes a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the scene display method of the foregoing embodiments can be realized.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of those different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method described in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that comprises one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention pertain.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing logic functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any apparatus that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection (electronic device) with one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the various parts of the present invention may be realized in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be realized with software or firmware that is stored in memory and executed by a suitable instruction execution system. For example, if realized in hardware, as in another embodiment, any of the following techniques known in the art, or a combination of them, may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the method of the above embodiments can be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiment or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, may each exist physically alone, or two or more units may be integrated in one module. The integrated module may be realized in the form of hardware or in the form of a software function module. If the integrated module is realized in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and cannot be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variants to the above embodiments within the scope of the present invention.

Claims (14)

  1. A scene display method, characterized by comprising:
    projecting onto a scene to be shown using a structured light device, and obtaining 3D models of each object in the scene to be shown at each time point;
    for each object in the scene to be shown, comparing the 3D models of the object at the respective time points to determine an attribute of the object, the attribute of the object being rigid or non-rigid;
    determining a display type of the object in the scene to be shown according to the attribute of the object; and
    showing the scene to be shown according to the 3D models and display types of each object in the scene to be shown.
  2. The method according to claim 1, characterized in that projecting onto the scene to be shown using a structured light device and obtaining the 3D models of each object in the scene to be shown at each time point comprises:
    for each time point, projecting onto the scene to be shown using the structured light device;
    imaging each object in the scene to be shown using a camera to obtain a depth image of each object in the scene to be shown; and
    computing the 3D model of each object by combining the depth image of each object with the positional relationship between the structured light device and the camera.
  3. The method according to claim 1 or 2, characterized in that the structured light produced by the structured light device is non-uniform structured light.
  4. The method according to claim 1, characterized in that comparing, for each object in the scene to be shown, the 3D models of the object at the respective time points to determine the attribute of the object comprises:
    for each object in the scene to be shown, analyzing the 3D models of the object at the respective time points and extracting feature point information of the object;
    comparing the feature point information of the object at the respective time points to determine a deformation degree of the object; and
    determining the attribute of the object according to the deformation degree of the object.
  5. The method according to claim 4, characterized in that determining the attribute of the object according to the deformation degree of the object comprises:
    if the deformation degree of the object is less than a preset threshold, determining that the attribute of the object is rigid; and
    if the deformation degree of the object is greater than or equal to the preset threshold, determining that the attribute of the object is non-rigid.
  6. The method according to claim 1, characterized in that determining the display type of the object in the scene to be shown according to the attribute of the object comprises:
    if the attribute of the object is rigid, determining that the display type of the object in the scene to be shown is enhanced display; and
    if the attribute of the object is non-rigid, determining that the display type of the object in the scene to be shown is weakened display.
  7. A scene display device, characterized by comprising:
    a projection module, configured to project onto a scene to be shown using a structured light device and obtain 3D models of each object in the scene to be shown at each time point;
    a comparing module, configured, for each object in the scene to be shown, to compare the 3D models of the object at the respective time points and determine an attribute of the object, the attribute of the object being rigid or non-rigid;
    a determining module, configured to determine a display type of the object in the scene to be shown according to the attribute of the object; and
    a display module, configured to show the scene to be shown according to the 3D models and display types of each object in the scene to be shown.
  8. The device according to claim 7, characterized in that the projection module comprises:
    a projection unit, configured to project onto the scene to be shown using the structured light device for each time point;
    an imaging unit, configured to image each object in the scene to be shown using a camera to obtain a depth image of each object in the scene to be shown; and
    a computing unit, configured to compute the 3D model of each object by combining the depth image of each object with the positional relationship between the structured light device and the camera.
  9. The device according to claim 7 or 8, characterized in that the structured light produced by the structured light device is non-uniform structured light.
  10. The device according to claim 7, characterized in that the comparing module comprises:
    an extraction unit, configured, for each object in the scene to be shown, to analyze the 3D models of the object at the respective time points and extract feature point information of the object;
    a comparing unit, configured to compare the feature point information of the object at the respective time points and determine a deformation degree of the object; and
    a determining unit, configured to determine the attribute of the object according to the deformation degree of the object.
  11. The device according to claim 10, characterized in that the determining unit is specifically configured to:
    determine that the attribute of the object is rigid when the deformation degree of the object is less than a preset threshold; and
    determine that the attribute of the object is non-rigid when the deformation degree of the object is greater than or equal to the preset threshold.
  12. The device according to claim 7, characterized in that the determining module comprises:
    a first determining unit, configured to determine, when the attribute of the object is rigid, that the display type of the object in the scene to be shown is enhanced display; and
    a second determining unit, configured to determine, when the attribute of the object is non-rigid, that the display type of the object in the scene to be shown is weakened display.
  13. A terminal device, characterized by comprising one or more of the following components: a housing, and a processor and a memory within the housing, wherein the processor runs a program corresponding to executable program code by reading the executable program code stored in the memory, so as to realize the scene display method according to any one of claims 1-6.
  14. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, realizes the scene display method according to any one of claims 1-6.
CN201710677492.1A 2017-08-09 2017-08-09 Scene display methods and device, terminal device Pending CN107657632A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710677492.1A CN107657632A (en) 2017-08-09 2017-08-09 Scene display methods and device, terminal device

Publications (1)

Publication Number Publication Date
CN107657632A true CN107657632A (en) 2018-02-02

Family

ID=61128455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710677492.1A Pending CN107657632A (en) 2017-08-09 2017-08-09 Scene display methods and device, terminal device

Country Status (1)

Country Link
CN (1) CN107657632A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150363645A1 (en) * 2014-06-11 2015-12-17 Here Global B.V. Method and apparatus for roof type classification and reconstruction based on two dimensional aerial images
CN105827952A (en) * 2016-02-01 2016-08-03 维沃移动通信有限公司 Photographing method for removing specified object and mobile terminal
CN106210542A (en) * 2016-08-16 2016-12-07 深圳市金立通信设备有限公司 The method of a kind of photo synthesis and terminal
CN106203279A (en) * 2016-06-28 2016-12-07 广东欧珀移动通信有限公司 The recognition methods of destination object, device and mobile terminal in a kind of augmented reality
CN106407875A (en) * 2016-03-31 2017-02-15 深圳奥比中光科技有限公司 Target feature extraction method and apparatus
CN106453853A (en) * 2016-09-22 2017-02-22 深圳市金立通信设备有限公司 Photographing method and terminal
CN106603903A (en) * 2015-10-15 2017-04-26 中兴通讯股份有限公司 Photo processing method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN JIMIN: "A Basic Course in 3D Printing Technology" (《3D打印技术基础教程》), 31 January 2016 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan City, Guangdong Province 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan City, Guangdong Province 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20180202
