WO2020029178A1 - Light and shadow rendering method and device for virtual object in panoramic video, and electronic device - Google Patents

Light and shadow rendering method and device for virtual object in panoramic video, and electronic device

Info

Publication number
WO2020029178A1
WO2020029178A1 (PCT/CN2018/099636; CN2018099636W)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
light
target
picture
coordinate
Prior art date
Application number
PCT/CN2018/099636
Other languages
English (en)
French (fr)
Inventor
菲永·奥利维尔
李建亿
Original Assignee
太平洋未来科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 太平洋未来科技(深圳)有限公司 filed Critical 太平洋未来科技(深圳)有限公司
Priority to PCT/CN2018/099636 priority Critical patent/WO2020029178A1/zh
Priority to CN201810975331.5A priority patent/CN109064544A/zh
Publication of WO2020029178A1 publication Critical patent/WO2020029178A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/50: Lighting effects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/02: Constructional features of telephone sets
    • H04M1/0202: Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026: Details of the structure or mounting of specific components
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/02: Constructional features of telephone sets
    • H04M1/04: Supports for telephone transmitters or receivers

Definitions

  • the present invention relates to the field of image processing technology, and in particular, to a light and shadow rendering method, device, and electronic device for a virtual object in a panoramic video.
  • Virtual reality technology aims to use computer technology to build a virtual world that is perceived in the same way as the real world.
  • Users can drag the panoramic video and watch it from any angle across 360 degrees, giving a truly immersive feeling; watching with VR glasses produces an even stronger sense of immersion.
  • The inventors found that in the related art, since virtual objects are generated by a computer in advance and the picture area of the panoramic video can be switched through 360 degrees, it is impossible to know in advance which area of the panoramic video the user will view, so light and shadow effects cannot be added to virtual objects in advance.
  • the light and shadow rendering method, device, and electronic device of a virtual object in a panoramic video provided by embodiments of the present invention are used to solve at least the foregoing problems in related technologies.
  • An embodiment of the present invention provides a method for rendering light and shadow of a virtual object in a panoramic video, including:
  • determining whether the current picture of the panoramic video on the display screen includes a preset virtual object feature point; if the virtual object feature point is included, decomposing the current picture into a first preset number of sub-pictures; determining a light intensity weighted center according to the image moments of the sub-pictures, and determining the light source position information of the current picture based on the light intensity weighted center and the geometric center of each sub-picture; determining the contour information and position information of the virtual object based on the virtual object feature points; and generating the light and shadow effects of the virtual object according to the light source position information and the contour information and position information of the virtual object.
  • the method further includes: setting a virtual object on a target frame picture of the panoramic video in advance, and recording a first frame identifier of the target frame picture, first coordinates of the feature points representing key features of the virtual object, and a correspondence between the first coordinates and the first frame identifier.
  • the determining whether the current picture of the panoramic video includes a preset virtual object feature point includes: obtaining a second frame identifier corresponding to the current picture; and matching the second frame identifier with the first frame identifier.
  • determining the contour information and position information of the virtual object based on the virtual object feature points includes: determining the contour information of the virtual object according to target second coordinates within the coordinate range of the display screen; determining the center coordinates of the virtual object on the display screen according to the contour information; and using the center coordinates as the position information of the virtual object.
  • determining the light source position information of the current picture based on the light intensity weighted center and the geometric center of the sub-picture includes: determining a light angle of each sub-picture according to its light intensity weighted center and geometric center; determining a weight value corresponding to each sub-picture; weighting and summing the light-angle vectors of the sub-pictures according to the weight values to obtain the light angle corresponding to the current picture; and determining the light source position information of the current picture according to the light angle and the pixel values of the current picture.
  • Another aspect of the embodiments of the present invention provides a light and shadow rendering device for a virtual object in a panoramic video, including:
  • the apparatus further includes a recording module for setting a virtual object on a target frame picture of the panoramic video in advance, and recording a first frame identifier of the target frame picture, first coordinates of the feature points representing key features of the virtual object, and the correspondence between the first coordinates and the first frame identifier.
  • the judging module includes: an obtaining unit for obtaining a second frame identifier corresponding to the current picture; a matching unit for matching the second frame identifier against the first frame identifiers, determining a target first frame identifier that matches the second frame identifier, and determining the target first coordinates corresponding to the target first frame identifier according to the correspondence; a conversion unit for converting the target first coordinates into target second coordinates corresponding to the display screen according to a preset model; a judgment unit for judging whether the target second coordinates are within the display screen coordinate range; and a determination unit for determining, if they are within the display screen coordinate range, that the current picture of the panoramic video includes a preset virtual object feature point.
  • the second determining module is further configured to determine the contour information of the virtual object according to target second coordinates within the coordinate range of the display screen, determine the center coordinates of the virtual object on the display screen according to the contour information, and use the center coordinates as the position information of the virtual object.
  • the first determining module is further configured to determine a light angle of each sub-picture according to its light intensity weighted center and geometric center; determine a weight value corresponding to each sub-picture; weight and sum the light-angle vectors of the sub-pictures according to the weight values to obtain the light angle corresponding to the current picture; and determine the light source position information of the current picture according to the light angle and the pixel values of the current picture.
  • Another aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can execute the light and shadow rendering method of the virtual object in the panoramic video.
  • the electronic device is a mobile phone.
  • The mobile phone includes a front panel with a display screen and a back cover; a recessed area is provided in the middle and lower part of the back cover, and a support plate adapted to the shape and size of the recessed area is installed in it.
  • The upper end of the support plate is hinged to the recessed area so that the support plate can be rotated to a predetermined angle with respect to the back cover; the lower end of the support plate is provided with a connecting piece, one end of which is flexibly connected to the support plate, while the other end is provided with a plug portion adapted to the shape and size of the mobile phone's charging interface, the plug portion mating with the charging interface.
  • the support plate includes a first plate body, a second plate body, a first connection plate, a second connection plate, and a link; the first plate body, the first connection plate, the second connection plate, and the second plate body are connected in sequence to form the plate. The first plate body has opposite first and second ends, and the second plate body has opposite third and fourth ends. The link includes a first rod body and a second rod body; opposite ends of the first rod body are hinged to the first end and the third end respectively, and opposite ends of the second rod body are hinged to the second end and the fourth end respectively, both rod bodies being disposed on the side of the support plate away from the front panel. The first connection plate and the second connection plate are located between the first plate body and the second plate body; one side of the first connection plate is hinged to one side of the first plate body, and one side of the second connection plate is hinged to one side of the second plate body.
  • The other side of the first connection plate is hinged to the other side of the second connection plate.
  • On the side of the support plate near the front panel, the first plate body is provided with a first portion disposed along its thickness direction, located at the hinge between the first plate body and the link.
  • The second plate body is likewise provided with a second portion disposed along its thickness direction, located on the second plate body.
  • The first portion, the second portion, the first connection plate, the second connection plate, and the link form a five-bar linkage mechanism.
  • the recessed area includes a bottom wall and a side wall, and the bottom wall and/or the side wall is provided with a ventilation structure, the ventilation structure being a plurality of ventilation holes or ventilation grilles.
  • the connecting member is made of rubber.
  • each of the first end, the second end, the third end, and the fourth end is provided with a recessed portion; the opposite ends of the first rod body are respectively placed in the recessed portions of the first end and the third end, and the opposite ends of the second rod body are respectively placed in the recessed portions of the second end and the fourth end.
  • the light and shadow rendering method, device, and electronic device provided by the embodiments of the present invention can determine in real time whether the currently played panoramic video includes the virtual object, and generate the light and shadow effects of the virtual object according to the position of the light source in the video, so that the light and shadow effects of the virtual object are consistent with the scene in the video.
  • The mobile phone with the support plate provided by the embodiment of the present invention plays VR video more smoothly, and the user does not need to hold the phone for a long time to watch, nor use a protective shell or paste parts on the phone; the phone's appearance is more attractive and its heat dissipation more efficient.
  • FIG. 1 is a flowchart of a light and shadow rendering method of a virtual object in a panoramic video according to an embodiment of the present invention
  • FIG. 2 is a flowchart of step S101 in a method for rendering light and shadow of a virtual object in a panoramic video according to an embodiment of the present invention
  • FIG. 3 is a structural diagram of a light and shadow rendering device for a virtual object in a panoramic video according to an embodiment of the present invention
  • FIG. 4 is a structural diagram of a light and shadow rendering device for a virtual object in a panoramic video according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of a hardware structure of an electronic device that executes a method for rendering light and shadow of a virtual object in a panoramic video according to an embodiment of the method of the present invention
  • FIG. 6 is a schematic structural diagram of a mobile phone for obtaining a target picture in a light and shadow rendering method of a virtual object in a panoramic video according to an embodiment of the present invention
  • FIG. 7 is an exploded view of a mobile phone support plate for obtaining a target picture in a light and shadow rendering method for a virtual object in a panoramic video according to an embodiment of the present invention
  • FIG. 8 is a schematic diagram of a mobile phone support plate in a supported state for obtaining a target picture in a light and shadow rendering method for a virtual object in a panoramic video according to an embodiment of the present invention
  • FIG. 9 is an enlarged view of portion A of FIG. 8;
  • FIG. 10 is an enlarged view of portion B of FIG. 8;
  • FIG. 11 is a schematic diagram of a mobile phone support plate in a folded state for obtaining a target picture in a light and shadow rendering method for a virtual object in a panoramic video according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a light and shadow rendering method of a virtual object in a panoramic video according to an embodiment of the present invention.
  • a light and shadow rendering method for a virtual object in a panoramic video provided by an embodiment of the present invention includes:
  • S101 Determine whether the current picture of the panoramic video on the display screen includes a preset virtual object feature point.
  • a virtual object may be set on the target frame picture of the panoramic video in advance, and the first frame identifier of the target frame picture, the first coordinates of the feature points representing the key features of the virtual object, and the correspondence between the first coordinates and the first frame identifier may be recorded.
  • virtual objects are generally set in certain scenes in a panoramic video, and the virtual objects interact with the plot in the video.
  • Each scene generally corresponds to several consecutive video frames of the panoramic video. These video frames are the target frames.
  • the frame identifier can be a frame number or other characteristic information that can uniquely identify the frame.
  • the virtual object is composed of several feature points. These feature points represent the key points that can identify the virtual object. That is, the feature points can determine the outline and basic characteristics of the virtual object.
  • the first coordinates of the feature points of the virtual object are the coordinate positions of several feature points constituting the virtual object in the panoramic video.
  • this step includes the following sub-steps:
  • S1011 Obtain a second frame identifier corresponding to the current picture.
  • The frame identifier corresponding to the current picture, that is, the second frame identifier, is obtained in real time or once every preset time period.
  • S1012 Match the second frame identifier against the first frame identifiers, determine a target first frame identifier that matches the second frame identifier, and determine the target first coordinates corresponding to the target first frame identifier according to the correspondence.
  • The target first coordinates are the positions, on the target frame picture where the target first frame identifier is located, of the feature points that constitute the virtual object set on that picture; these feature points represent the outline and basic features of the virtual object.
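As a concrete illustration of steps S1011 and S1012, the recorded correspondence can be kept as a simple mapping from first frame identifiers to target first coordinates. The identifier format and coordinate values below are invented for illustration and are not taken from the patent:

```python
# Hypothetical pre-recorded correspondence: first frame identifier ->
# target first coordinates of the virtual object's feature points.
# Identifiers and coordinates are illustrative only.
correspondence = {
    "frame_0120": [(100, 200), (150, 260), (120, 300)],
}

def lookup_target_first_coords(second_frame_id):
    """Match the current picture's second frame identifier against the
    recorded first frame identifiers; return the target first
    coordinates on a match, or None if the current frame carries no
    virtual object."""
    return correspondence.get(second_frame_id)
```

A frame whose identifier is not recorded simply yields no feature points, so the rendering steps are skipped for it.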
  • S1013 Convert the first coordinate of the target into the second coordinate of the target corresponding to the display screen according to a preset model.
  • According to the preset model, the target first coordinates (x1, y1) of a feature point, that is, its two-dimensional coordinates in the panoramic video, may be converted into its target second coordinates (x2, y2) on the display screen.
  • M 1 is a transformation matrix that transforms the coordinates of the original two-dimensional panoramic video into coordinates of a spherical video source.
  • the matrix M 1 belongs to common knowledge in the art, and is not repeated here.
  • M 3 is a matrix related to the resolution of the display screen, which can convert the coordinates of the viewing plane into the coordinates of the display screen.
  • The above process is repeated until the target first coordinates of all feature points constituting the virtual object have been converted into target second coordinates.
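A minimal sketch of the conversion in S1013, assuming the preset model is a chain of homogeneous 3x3 matrices. The example matrices below (a normalizing M1 and a screen-mapping M3, with the intermediate viewing transform folded in) are illustrative stand-ins, not the patent's actual matrices:

```python
import numpy as np

def panorama_to_screen(x1, y1, M1, M3):
    """Convert a feature point's target first coordinates (x1, y1) in
    the 2-D panoramic frame into target second coordinates (x2, y2) on
    the display screen, modeled as a chain of homogeneous 3x3
    transforms (M1 and M3 follow the patent's notation; their exact
    contents depend on the player and are assumed here)."""
    p = np.array([x1, y1, 1.0])
    q = M3 @ (M1 @ p)                # apply the transform chain
    return q[0] / q[2], q[1] / q[2]  # de-homogenize

# Illustrative matrices (assumptions, not from the patent):
# M1 maps a 3840x1920 panorama onto a centered normalized plane,
# M3 scales/offsets that plane to a 1920x1080 screen.
M1 = np.array([[1/3840, 0,      -0.5],
               [0,      1/1920, -0.5],
               [0,      0,       1.0]])
M3 = np.array([[1920, 0,     960],
               [0,    1080,  540],
               [0,    0,     1]])
```

Under these example matrices, the panorama center maps to the screen center, which is a quick sanity check for any concrete transform chain.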
  • S1014 Determine whether the second coordinate of the target is within the coordinate range of the display screen.
  • Since the panoramic video is a 360-degree video, only part of the picture is displayed on the display screen at any time.
  • If the target second coordinates are within the coordinate range of the display screen, step S1015 is executed, indicating that the virtual object, or a part of it, will appear in the current picture on the display screen.
  • S102 The current picture is decomposed into a first preset number of small square grid cells of a preset size, and each small grid cell is used as a sub-picture, where each sub-picture is formed by a second preset number of pixels.
  • the current picture can be decomposed into C columns and R rows (where C and R are integers, and C ⁇ 2, R ⁇ 2) to obtain multiple sub-pictures, each of which consists of p pixels.
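The grid decomposition into C columns and R rows can be sketched as follows (NumPy is assumed for the frame representation; edge pixels left over when the frame size is not divisible by the grid are simply dropped in this sketch):

```python
import numpy as np

def decompose(frame, rows, cols):
    """Split a frame (H x W array) into rows*cols sub-pictures,
    mirroring the grid decomposition of step S102.  Each sub-picture
    is a view of the corresponding grid cell."""
    h, w = frame.shape[:2]
    bh, bw = h // rows, w // cols    # cell height and width
    return [frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]
```

Each returned cell then has its image moments and geometric center computed independently in step S103.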
  • S103 Determine a light intensity weighting center according to the image moment of the sub-picture, and determine light source position information of the current picture based on the light intensity weighting center and the geometric center of the sub-picture.
  • An image moment is a set of moments calculated from a digital image. It usually describes global features of the image and provides information about its geometric characteristics, such as size, position, orientation, and shape. For example, first-order moments are related to shape; second-order moments show how far a curve spreads around its mean line; third-order moments measure the symmetry about the mean; and the second- and third-order moments can be used to derive a set of seven invariant moments.
  • Invariant moments are statistical characteristics of the image that are invariant under translation, scaling, and rotation.
  • Geometric invariant moments can therefore be used as an important feature to represent an object.
  • The above-mentioned first-, second-, and third-order moments and the seven invariant moments derived from the second- and third-order moments all have specific calculation formulas; determining the light intensity weighted center according to these formulas is common technical knowledge in the field and will not be repeated here.
  • The light intensity weighted center g is determined from the image moments; its coordinate position in the sub-picture is (xg, yg).
  • The light angle of the sub-picture is determined from its light intensity weighted center and geometric center. Specifically, the coordinate position (xc, yc) of the geometric center c in the sub-picture is determined and compared with the coordinate position of the light intensity weighted center g. The direction of the vector cg, from the geometric center c to the weighted center g, is the result of the influence of the light and provides a local indication of the light effect at the position of the sub-picture in the current picture: the direction of cg represents the direction of the light, d represents the modulus of the vector cg, and θ represents the light angle of the sub-picture.
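A sketch of how first-order image moments yield the weighted center g and, together with the geometric center c, the light angle θ and modulus d of a sub-picture. Grayscale input is assumed; the patent leaves the exact moment formulas to common knowledge, so the standard m00/m10/m01 centroid is used here:

```python
import math
import numpy as np

def light_angle(sub):
    """Compute the light-intensity weighted center g of a grayscale
    sub-picture from its zeroth- and first-order image moments, compare
    it with the geometric center c, and return (theta, d): the light
    angle and the modulus of the vector cg (step S103)."""
    ys, xs = np.indices(sub.shape)
    m00 = sub.sum()                                   # total intensity
    xg = (sub * xs).sum() / m00                       # weighted center g
    yg = (sub * ys).sum() / m00
    xc = (sub.shape[1] - 1) / 2                       # geometric center c
    yc = (sub.shape[0] - 1) / 2
    dx, dy = xg - xc, yg - yc                         # vector c -> g
    return math.atan2(dy, dx), math.hypot(dx, dy)
```

For a cell whose right column is bright and the rest dark, g shifts right of c, so θ is 0 (light arriving along the +x direction in this convention).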
  • A weight value corresponding to each sub-picture may be determined according to the modulus of the light-angle vector of each sub-picture, and/or the light angle, and/or prior experience.
  • The light source position information of the current picture is determined according to the light angle and the pixel values of the current picture. Specifically, the pixel values of points along the light angle, and of points near it, are acquired to obtain the brightness distribution along the light angle. The closer a pixel value is to 0, the darker the pixel; the closer it is to the maximum value 255, the brighter the pixel. Therefore, by comparing the pixel values of these points, the point with the largest pixel value is inferred to be the coordinate position of the light source; optionally, the coordinates of the light source may be denoted (xl, yl).
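The weighted summation of the per-sub-picture light-direction vectors, and the search for the brightest pixel along the resulting light angle, might be sketched as follows. How the weights are chosen is left open by the patent, and the fixed step count of the ray walk is an arbitrary choice here:

```python
import math

def global_light_angle(angles, moduli, weights):
    """Weight and sum the per-sub-picture light-direction vectors
    (given as angle theta and modulus d) to obtain the light angle of
    the whole current picture."""
    vx = sum(w * d * math.cos(a) for a, d, w in zip(angles, moduli, weights))
    vy = sum(w * d * math.sin(a) for a, d, w in zip(angles, moduli, weights))
    return math.atan2(vy, vx)

def brightest_along_ray(frame, x0, y0, angle, steps=100):
    """Walk along the light angle from (x0, y0) and return the pixel
    with the largest value (0..255) as the inferred light source
    position (xl, yl).  `frame` is a list of rows of pixel values."""
    h, w = len(frame), len(frame[0])
    best, best_xy = -1, (x0, y0)
    for t in range(steps):
        x = int(round(x0 + t * math.cos(angle)))
        y = int(round(y0 + t * math.sin(angle)))
        if not (0 <= x < w and 0 <= y < h):
            break                      # left the picture
        if frame[y][x] > best:
            best, best_xy = frame[y][x], (x, y)
    return best_xy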
  • S104 These feature points are connected to determine the contour information of the virtual object; the center coordinates of the virtual object on the display screen are then determined according to the contour information and used as the position of the virtual object in the current picture of the display screen.
  • S105 Generate light and shadow effects of the virtual object according to the light source position information and the contour information and position information of the virtual object.
  • The position at which the shadow is displayed can be determined from the position of the light source and the position of the virtual object, and the display shape of the shadow is obtained from the contour information of the virtual object; combining the two generates the light and shadow effect of the virtual object.
  • The environment around the virtual object in the current picture (such as surrounding buildings and objects) affects the shape of its shadow, so environmental information around the virtual object, such as the positions of surrounding objects relative to the virtual object and their shape information, can be obtained in advance, and a corresponding shadow environment adjustment factor determined from this information and used to adjust the shadow shape of the virtual object.
  • a light and shadow effect of the virtual object is generated according to the center position of the shadow and the shape of the shadow. Specifically, the center point of the shadow shape is determined first, and the center point of the shadow shape is placed at the center coordinate of the virtual object on the display screen, thereby determining the position of the shadow in the current picture, and generating the light and shadow effect of the virtual object.
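The patent states only that the shadow position follows from the light source and object positions; one plausible rule, shown purely as an illustration, offsets the shadow center away from the light source along the light-to-object direction:

```python
def shadow_center(light_xy, object_xy, length=1.0):
    """Place the shadow on the side of the virtual object opposite the
    light source: the shadow center is offset from the object center
    along the light-to-object direction, scaled by `length`.
    An illustrative rule, not the patent's exact formula."""
    dx = object_xy[0] - light_xy[0]
    dy = object_xy[1] - light_xy[1]
    return (object_xy[0] + length * dx, object_xy[1] + length * dy)
```

The shadow shape derived from the contour would then be centered at this point, optionally scaled by the environment adjustment factor described above.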
  • The light and shadow rendering method for a virtual object in a panoramic video provided by the embodiment of the present invention can determine in real time whether a currently played panoramic video includes a virtual object, and generate the light and shadow effects of the virtual object according to the position of the light source in the video, so that the light and shadow effects of the virtual object are consistent with the scene in the video.
  • FIG. 3 is a structural diagram of a light and shadow rendering device for a virtual object in a panoramic video according to an embodiment of the present invention.
  • The device specifically includes a judgment module 100, a decomposition module 200, a first determination module 300, a second determination module 400, and a generation module 500. Among them:
  • The judgment module 100 is configured to judge whether the current picture of the panoramic video on the display screen includes a preset virtual object feature point; the decomposition module 200 is configured to decompose the current picture into a first preset number of sub-pictures if the virtual object feature point is included; the first determination module 300 is configured to determine a light intensity weighted center according to the image moments of the sub-pictures, and determine the light source position information of the current picture based on the light intensity weighted center and the geometric center of each sub-picture; the second determination module 400 is configured to determine contour information and position information of the virtual object based on the virtual object feature points; and the generation module 500 is configured to generate the light and shadow effects of the virtual object according to the light source position information and the contour information and position information of the virtual object.
  • The light and shadow rendering device for a virtual object in a panoramic video provided by an embodiment of the present invention is specifically configured to execute the method provided in the embodiment shown in FIG. 1; its implementation principles, methods, and functional uses are similar to those of the embodiment shown in FIG. 1 and will not be repeated here.
  • FIG. 4 is a structural diagram of a light and shadow rendering device for a virtual object in a panoramic video according to an embodiment of the present invention.
  • The device specifically includes a recording module 600, a judgment module 100, a decomposition module 200, a first determination module 300, a second determination module 400, and a generation module 500. Among them:
  • The recording module 600 is configured to set a virtual object on a target frame picture of the panoramic video in advance, and record a first frame identifier of the target frame picture, first coordinates of feature points representing key features of the virtual object, and the correspondence between the first coordinates and the first frame identifier; the judgment module 100 is configured to judge whether the current picture of the panoramic video on the display screen includes a preset virtual object feature point; the decomposition module 200 is configured to decompose the current picture into a first preset number of sub-pictures if the virtual object feature point is included; the first determination module 300 is configured to determine a light intensity weighted center according to the image moments of the sub-pictures, and determine the light source position information of the current picture based on the light intensity weighted center and the geometric center of each sub-picture; the second determination module 400 is configured to determine contour information and position information of the virtual object based on the virtual object feature points; and the generation module 500 is configured to generate the light and shadow effects of the virtual object based on the light source position information and the contour information and position information of the virtual object.
  • The judgment module 100 includes an obtaining unit 110, a matching unit 120, a conversion unit 130, a judgment unit 140, and a determination unit 150. Among them:
  • The obtaining unit 110 is configured to obtain a second frame identifier corresponding to the current picture; the matching unit 120 is configured to match the second frame identifier against the first frame identifiers, determine a target first frame identifier that matches the second frame identifier, and determine the target first coordinates corresponding to the target first frame identifier according to the correspondence; the conversion unit 130 is configured to convert the target first coordinates into target second coordinates corresponding to the display screen according to a preset model; the judgment unit 140 is configured to judge whether the target second coordinates are within the display screen coordinate range; and the determination unit 150 is configured to determine, if they are within the display screen coordinate range, that the current picture of the panoramic video includes a preset virtual object feature point.
  • The second determination module 400 is further configured to determine the contour information of the virtual object according to target second coordinates within the coordinate range of the display screen, determine the center coordinates of the virtual object on the display screen according to the contour information, and use the center coordinates as the position information of the virtual object.
  • The first determination module 300 is further configured to determine a light angle of each sub-picture according to its light intensity weighted center and geometric center; determine a weight value corresponding to each sub-picture; weight and sum the light-angle vectors of the sub-pictures according to the weight values to obtain the light angle corresponding to the current picture; and determine the light source position information of the current picture according to the light angle and the pixel values of the current picture.
  • The light and shadow rendering device for a virtual object in a panoramic video provided by an embodiment of the present invention is specifically configured to execute the method provided by the embodiments shown in FIG. 1 and FIG. 2; its implementation principles, methods, and functional uses are similar to those of the embodiments shown in FIG. 1 and FIG. 2 and will not be repeated here.
  • The light and shadow rendering device of the virtual object in the panoramic video according to the embodiments of the present invention described above can serve as a software or hardware functional unit that is set independently in the above-mentioned electronic device, or be integrated in the processor as one of its functional modules, to execute the light and shadow rendering method of the virtual object in the panoramic video according to the embodiments of the present invention.
  • FIG. 5 is a schematic diagram of a hardware structure of an electronic device that executes a method for rendering light and shadow of a virtual object in a panoramic video provided by a method embodiment of the present invention.
  • the electronic device includes:
  • one or more processors 510 and a memory 520; one processor 510 is taken as an example in FIG. 5.
  • the apparatus for performing the light and shadow rendering method for a virtual object in a panoramic video may further include an input device 530 and an output device 540.
  • the processor 510, the memory 520, the input device 530, and the output device 540 may be connected through a bus or in other manners. In FIG. 5, the connection through the bus is taken as an example.
  • the memory 520 is a non-volatile computer-readable storage medium and can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the light and shadow rendering method for a virtual object in a panoramic video in the embodiments of the present invention.
  • the processor 510 executes the various functional applications and data processing of the server by running the non-volatile software programs, instructions, and modules stored in the memory 520, thereby implementing the light and shadow rendering method for virtual objects in the panoramic video.
  • the memory 520 may include a program storage area and a data storage area; the program storage area may store an operating system and the application programs required for at least one function, and the data storage area may store data created by use of the light and shadow rendering device for a virtual object in a panoramic video according to the embodiments of the present invention, etc.
  • the memory 520 may include a high-speed random access memory, and may further include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the memory 520 may optionally include memory remotely disposed relative to the processor 510, and such remote memory may be connected over a network to the light and shadow rendering device for a virtual object in a panoramic video.
  • Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the input device 530 may receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of a light and shadow rendering device of a virtual object in a panoramic video.
  • the input device 530 may include a device such as a pressing module.
  • the one or more modules are stored in the memory 520, and when executed by the one or more processors 510, perform a light and shadow rendering method of a virtual object in the panoramic video.
  • the electronic devices in the embodiments of the present invention exist in various forms, including but not limited to:
  • (1) Mobile communication equipment: this type of equipment is characterized by mobile communication functions, with voice and data communication as its main goal. Such terminals include smart phones (e.g., iPhone), multimedia phones, feature phones, and low-end phones.
  • (2) Ultra-mobile personal computer equipment: this type of equipment belongs to the category of personal computers, has computing and processing functions, and generally also has mobile Internet access. Such terminals include PDA, MID, and UMPC devices, for example the iPad.
  • (3) Servers.
  • specifically, the electronic device in this embodiment may be a mobile phone with a support structure; referring to FIGS. 6-11, the phone may be used to perform the light and shadow rendering method for a virtual object in a panoramic video in the foregoing embodiments and to play VR panoramic video. A support plate is provided in the back cover of the phone, so that the phone is supported stably enough when watching videos; at the same time the user no longer needs a protective shell with a stand or an externally glued stand, which facilitates heat dissipation, improves the appearance of the phone, and relieves the fatigue caused by holding the phone for a long time.
  • the mobile phone includes a front panel with a display screen (not shown in the figure) and a phone back cover 1000; a recessed area 1100 is provided in the middle-lower part of the back cover 1000, and a support plate 2000 matching the shape and size of the recessed area 1100 is installed in it; the upper end of the support plate 2000 is hinged to the recessed area 1100 so that the support plate 2000 can be rotated to a preset angle relative to the back cover 1000.
  • the upper end of the support plate 2000 in this embodiment may have a rotating shaft, and the upper end of the recessed area has a shaft hole matching the rotating shaft; the rotation of the support plate 2000 is realized through the cooperation of the shaft and the shaft hole. Of course, other rotating mechanisms of simpler structure are also within the optional scope of this embodiment.
  • a connection piece 3000 is provided at the lower end of the support plate 2000; one end of the connection piece 3000 is flexibly connected to the support plate 2000, and the other end has a plugging portion 310 matching the shape and size of the phone charging interface 4000, the plugging portion 310 plugging into the charging interface 4000. The connection piece 3000 in this embodiment is made of rubber; rubber not only has good deformability but is also low-cost and easy to implement.
  • a rotatable support plate is provided in the recessed area of the back cover, which can prop the phone up on its side to meet the user's viewing requirements without fitting a protective shell.
  • the lower end of the support plate is fixed by connecting to the phone's charging interface, which also protects the charging interface and improves the functionality of the support plate.
  • unfolding the support plate in this embodiment is also very convenient: the plugging portion only needs to be pulled out of the charging interface.
  • the existing support plate cannot play the role of fixing the mobile phone. Therefore, the inventor made further improvements to the above-mentioned support plate structure.
  • the support plate 2000 in this embodiment specifically includes a first plate body 2100, a second plate body 2200, a first connection plate 2300, a second connection plate 2400, and a connecting rod 2500.
  • the first plate body 2100, the first connection plate 2300, the second connection plate 2400, and the second plate body 2200 are sequentially connected to form a plate body.
  • the first plate body 2100 has opposite first and second ends (i.e., the upper and lower ends in the figure), and the second plate body 2200 has opposite third and fourth ends (i.e., the upper and lower ends in the figure); the link 2500 includes a first rod body 2510 and a second rod body 2520, opposite ends of the first rod body 2510 being hinged to the first end and the third end respectively, and opposite ends of the second rod body 2520 being hinged to the second end and the fourth end respectively, the first rod body 2510 and the second rod body 2520 both being disposed on the side of the support plate away from the front panel.
  • it should be noted that the number of connecting rods 2500 in this embodiment may also be one: the two ends of the single rod 2500 are connected to the first end and the second end respectively, or to the third end and the fourth end respectively, or one end of the rod 2500 is connected to the middle of one side of the first plate body 2100 and the other end to the middle of one side of the second plate body 2200. In that case the first connection plate 2300 and the second connection plate 2400 are split structures, i.e., each consists of two plate-shaped members: above the rod 2500 is one plate-shaped member of each of the first connection plate 2300 and the second connection plate 2400, and below the rod 2500 is the other plate-shaped member of each.
  • the first connection plate 2300 and the second connection plate 2400 in this embodiment are located between the first plate body 2100 and the second plate body 2200; one side of the first connection plate 2300 is hinged to one side of the first plate body 2100, one side of the second connection plate 2400 is hinged to one side of the second plate body 2200, and the other side of the first connection plate 2300 is hinged to the other side of the second connection plate 2400, the first connection plate 2300 and the second connection plate 2400 being disposed on the side of the support plate 2000 near the front panel.
  • the first plate body 2100 in this embodiment is provided with a first portion 2110 disposed along its thickness direction, located between the hinge of the first plate body 2100 with the connecting rod 2500 and the hinge of the first plate body 2100 with the first connection plate 2300; the second plate body 2200 is provided with a second portion 2210 disposed along its thickness direction, located between the hinge of the second plate body 2200 with the connecting rod 2500 and the hinge of the second plate body 2200 with the second connection plate 2400. The first portion 2110, the second portion 2210, the first connection plate 2300, the second connection plate 2400, and the connecting rod 2500 form a five-bar mechanism.
  • the above structure forms a five-bar mechanism through the first portion 2110, the second portion 2210, the first connection plate 2300, the second connection plate 2400, and the connecting rod 2500. When support is needed, the entire support plate 2000 is rotated to a certain angle; since the first rod body 2510 and the second rod body 2520 are both disposed on the side of the support plate 2000 away from the front panel while the first connection plate 2300 and the second connection plate 2400 are disposed on the side near the front panel, a force applied to the supporting face of the support plate 2000 in the direction away from the front panel places the five-bar mechanism at its first dead-point position: the first plate body 2100, second plate body 2200, first connection plate 2300, and second connection plate 2400 cannot rotate, and the dead point can only be released by an external force toward the front panel. This ensures that the support plate 2000 always behaves as a single plate, providing stable support for the phone. When the dead point is released by a force toward the front panel, the first plate body 2100 and the second plate body 2200 can be folded; the first connection plate 2300 and the second connection plate 2400 then rotate to a second dead-point position and form a finger receiving portion into which a finger can be inserted, fixing the phone on the user's hand and preventing it from falling when the user is jostled in a crowd. The support plate 2000 of this structure therefore not only supports the phone but also fixes it, greatly improving its functionality.
  • the hinged connection between two components described above may be realized by providing a shaft hole in one component and a matching rotating shaft on the other; together they form a rotating mechanism so that the two components can rotate relative to each other.
  • the first connection plate 2300 and the second connection plate 2400 may advantageously be made of a material with a certain deformability, such as soft plastic, so that inserting a finger into the finger receiving portion formed by them is more comfortable.
  • the first end, second end, third end, and fourth end in this embodiment are each provided with a recessed portion 2600; the opposite ends of the first rod body 2510 are respectively received in the recessed portions 2600 of the first end and the third end, and the opposite ends of the second rod body 2520 are respectively received in the recessed portions 2600 of the second end and the fourth end.
  • this structure sets the connecting rod 2500 mechanism into the plate bodies through the recessed portions, so that the rod mechanism, the plate bodies, and the connection plates together form a single plate structure, improving the integrity of the entire support plate 2000 and facilitating its installation.
  • the recessed area 1100 specifically includes a bottom wall and a side wall; a ventilation structure 1110 is provided on the bottom wall, and the ventilation structure 1110 may be a plurality of ventilation holes, or a ventilation grille provided on the bottom wall.
  • the design of the ventilation holes and the ventilation grille can enhance the heat dissipation effect of the mobile phone when the support plate 2000 is in a supported state or a folded state, thereby extending the service life of the mobile phone.
  • the device embodiments described above are only schematic, and the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules, which may be located in One place, or can be distributed to multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment. Those of ordinary skill in the art can understand and implement without creative labor.
  • An embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by an electronic device, cause the electronic device to perform the light and shadow rendering method for a virtual object in a panoramic video in any of the foregoing method embodiments.
  • An embodiment of the present invention provides a computer program product, wherein the computer program product includes a computer program stored on a non-transitory computer-readable storage medium, the computer program includes program instructions, and when the program instructions When executed by an electronic device, the electronic device is caused to execute a light and shadow rendering method of a virtual object in a panoramic video in any of the foregoing method embodiments.
  • from the description of the above implementations, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware.
  • the essence of the above technical solution, or the part that contributes to the prior art, may be embodied in the form of a software product; the computer software product may be stored in a computer-readable storage medium. A computer-readable medium includes any mechanism that stores or transmits information in a form readable by a machine (e.g., a computer): for example, machine-readable media include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals). The computer software product includes a number of instructions causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the various embodiments or certain parts of the embodiments.

Abstract

Embodiments of the present invention provide a light and shadow rendering method, device, and electronic apparatus for a virtual object in a panoramic video, including: determining whether the current picture of a panoramic video on a display screen includes preset virtual object feature points; if the virtual object feature points are included, decomposing the current picture into a first preset number of sub-pictures; determining a light intensity weighted center from the image moments of each sub-picture, and determining light source position information of the current picture based on the light intensity weighted centers and geometric centers of the sub-pictures; determining outline information and position information of the virtual object based on the virtual object feature points; and generating the light and shadow effect of the virtual object according to the light source position information and the outline information and position information of the virtual object. It can thus be determined in real time whether the currently playing panoramic video includes a virtual object, and the virtual object's light and shadow effect is generated according to the light source position in the video.

Description

Light and Shadow Rendering Method, Device, and Electronic Apparatus for a Virtual Object in a Panoramic Video
Technical Field
The present invention relates to the field of image processing technology, and in particular to a light and shadow rendering method and device, and an electronic apparatus, for a virtual object in a panoramic video.
Background
Virtual reality technology aims to use computer technology to construct a virtual world that is perceived in the same way as the real world. When watching a panoramic video, a user can drag the view to any angle over 360 degrees, giving a genuinely immersive feeling, and watching through VR glasses produces an even stronger sense of immersion. In the process of implementing the present invention, however, the inventors found that in the related art, since virtual objects are all generated by computer in advance and the picture region of a panoramic video can be switched over 360 degrees, it is impossible to know in advance which region of the panoramic video the user will watch, so light and shadow effects cannot be added to a virtual object beforehand.
Because of the high cost, more and more mobile device users no longer watch VR video through a dedicated VR headset, but instead watch VR video played on a phone or television screen directly through simple VR lenses. To avoid exhaustion from holding a phone for a long time while watching, the phone usually needs to be mounted on a stand to stay stable; a simple phone stand gives ordinary users the stable support they need, but existing simple stands are usually part of a protective shell or glued to the back of the phone with strong adhesive. Because phone glass is increasingly sturdy, and for reasons of heat dissipation and appearance, users are often unwilling to use protective shells or glued-on simple stands.
Summary of the Invention
The light and shadow rendering method and device, and the electronic apparatus, for a virtual object in a panoramic video provided by the embodiments of the present invention are intended to solve at least the above problems in the related art.
One aspect of the embodiments of the present invention provides a light and shadow rendering method for a virtual object in a panoramic video, including:
determining whether the current picture of a panoramic video on a display screen includes preset virtual object feature points; if the virtual object feature points are included, decomposing the current picture into a first preset number of sub-pictures; determining a light intensity weighted center according to the image moments of each sub-picture, and determining light source position information of the current picture based on the light intensity weighted centers and geometric centers of the sub-pictures; determining outline information and position information of the virtual object based on the virtual object feature points; and generating the light and shadow effect of the virtual object according to the light source position information and the outline information and position information of the virtual object.
Further, the method also includes: setting a virtual object on a target frame picture of the panoramic video in advance, and recording a first frame identifier of the target frame picture, first coordinates of the feature points representing key features of the virtual object, and the correspondence between the first coordinates and the first frame identifier.
Further, determining whether the current picture of the panoramic video includes preset virtual object feature points includes: obtaining a second frame identifier corresponding to the current picture; matching the second frame identifier against the first frame identifiers, determining a target first frame identifier matching the second frame identifier, and determining the target first coordinates corresponding to the target first frame identifier according to the correspondence; converting the target first coordinates into target second coordinates corresponding to the display screen according to a preset model; determining whether the target second coordinates are within the coordinate range of the display screen; and, if so, determining that the current picture of the panoramic video includes preset virtual object feature points.
Further, determining the outline information and position information of the virtual object based on the virtual object feature points includes: determining the outline information of the virtual object according to the target second coordinates within the coordinate range of the display screen; and determining the center coordinate of the virtual object on the display screen according to the outline information, and using the center coordinate as the position information of the virtual object.
Further, determining the light source position information of the current picture based on the light intensity weighted centers and geometric centers of the sub-pictures includes: determining a light angle of each sub-picture according to the light intensity weighted center and the geometric center of the sub-picture; determining a weight value corresponding to each sub-picture; weighting and summing the light-angle vectors of the sub-pictures according to the weight values to obtain the light angle corresponding to the current picture; and determining the light source position information of the current picture according to that light angle and the pixel values of the current picture.
Another aspect of the embodiments of the present invention provides a light and shadow rendering device for a virtual object in a panoramic video, including:
a judging module configured to determine whether the current picture of a panoramic video on a display screen includes preset virtual object feature points; a decomposition module configured to decompose the current picture into a first preset number of sub-pictures if the virtual object feature points are included; a first determining module configured to determine a light intensity weighted center according to the image moments of each sub-picture and to determine light source position information of the current picture based on the light intensity weighted centers and geometric centers of the sub-pictures; a second determining module configured to determine outline information and position information of the virtual object based on the virtual object feature points; and a generating module configured to generate the light and shadow effect of the virtual object according to the light source position information and the outline information and position information of the virtual object.
Further, the device also includes: a recording module configured to set a virtual object on a target frame picture of the panoramic video in advance and to record the first frame identifier of the target frame picture, the first coordinates of the feature points representing key features of the virtual object, and the correspondence between the first coordinates and the first frame identifier.
Further, the judging module includes: an obtaining unit configured to obtain the second frame identifier corresponding to the current picture; a matching unit configured to match the second frame identifier with the first frame identifiers, determine the target first frame identifier matching the second frame identifier, and determine the target first coordinates corresponding to the target first frame identifier according to the correspondence; a conversion unit configured to convert the target first coordinates into the target second coordinates corresponding to the display screen according to a preset model; a judging unit configured to determine whether the target second coordinates are within the coordinate range of the display screen; and a determining unit configured to determine, if so, that the current picture of the panoramic video includes preset virtual object feature points.
Further, the second determining module is also configured to determine the outline information of the virtual object according to the target second coordinates within the coordinate range of the display screen, determine the center coordinate of the virtual object on the display screen according to the outline information, and use the center coordinate as the position information of the virtual object.
Further, the first determining module is also configured to determine the light angle of each sub-picture according to the light intensity weighted center and the geometric center of the sub-picture, determine the weight value corresponding to each sub-picture, weight and sum the light-angle vectors of the sub-pictures according to the weight values to obtain the light angle corresponding to the current picture, and determine the light source position information of the current picture according to the light angle and the pixel values of the current picture.
Yet another aspect of the embodiments of the present invention provides an electronic apparatus, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the above light and shadow rendering method for a virtual object in a panoramic video.
Further, the electronic apparatus is a mobile phone including a front panel with a display screen and a phone back cover; a recessed area is provided in the middle-lower part of the back cover, and a support plate matching the shape and size of the recessed area is installed in the recessed area; the upper end of the support plate is hinged to the recessed area so that the support plate can be rotated to a preset angle relative to the back cover; a connection piece is provided at the lower end of the support plate, one end of the connection piece being flexibly connected to the support plate and the other end having a plugging portion matching the shape and size of the phone charging interface, the plugging portion plugging into the charging interface.
Further, the support plate includes a first plate body, a second plate body, a first connection plate, a second connection plate, and a connecting rod, the first plate body, first connection plate, second connection plate, and second plate body being connected in sequence to form a plate; the first plate body has opposite first and second ends, and the second plate body has opposite third and fourth ends; the connecting rod includes a first rod body and a second rod body, opposite ends of the first rod body being hinged to the first end and the third end respectively, and opposite ends of the second rod body being hinged to the second end and the fourth end respectively, the first rod body and the second rod body both being disposed on the side of the support plate away from the front panel; the first connection plate and the second connection plate are located between the first plate body and the second plate body, one side of the first connection plate being hinged to one side of the first plate body, one side of the second connection plate being hinged to one side of the second plate body, and the other side of the first connection plate being hinged to the other side of the second connection plate, the first connection plate and the second connection plate being disposed on the side of the support plate near the front panel; the first plate body is provided with a first portion disposed along its thickness direction, located between the hinge of the first plate body with the connecting rod and the hinge of the first plate body with the first connection plate; the second plate body is provided with a second portion disposed along its thickness direction, located between the hinge of the second plate body with the connecting rod and the hinge of the second plate body with the second connection plate; the first portion, the second portion, the first connection plate, the second connection plate, and the connecting rod form a five-bar mechanism.
Further, the recessed area includes a bottom wall and a side wall, a ventilation structure being provided on the bottom wall and/or the side wall, the ventilation structure being a plurality of ventilation holes or a ventilation grille.
Further, the connection piece is made of rubber.
Further, the first end, second end, third end, and fourth end are each provided with a recessed portion; the opposite ends of the first rod body are respectively received in the recessed portions of the first end and the third end, and the opposite ends of the second rod body are respectively received in the recessed portions of the second end and the fourth end.
As can be seen from the above technical solutions, the light and shadow rendering method, device, and electronic apparatus for a virtual object in a panoramic video provided by the embodiments of the present invention can determine in real time whether the currently playing panoramic video includes a virtual object and generate the virtual object's light and shadow effect according to the light source position in the video, so that the light and shadow effect of the virtual object stays consistent with the scene in the video. In addition, the mobile phone with a support plate provided by the embodiments of the present invention plays VR video more stably, the user does not have to hold the phone for a long time, and no protective shell or glued parts are needed, making the phone better-looking and its heat dissipation more effective.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the embodiments of the present invention; those of ordinary skill in the art can obtain other drawings based on them.
FIG. 1 is a flowchart of a light and shadow rendering method for a virtual object in a panoramic video provided by an embodiment of the present invention;
FIG. 2 is a flowchart of step S101 of the light and shadow rendering method for a virtual object in a panoramic video provided by an embodiment of the present invention;
FIG. 3 is a structural diagram of a light and shadow rendering device for a virtual object in a panoramic video provided by an embodiment of the present invention;
FIG. 4 is a structural diagram of a light and shadow rendering device for a virtual object in a panoramic video provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of the hardware structure of an electronic apparatus that executes the light and shadow rendering method for a virtual object in a panoramic video provided by the method embodiments of the present invention;
FIG. 6 is a schematic structural diagram of a mobile phone used to obtain target pictures in the light and shadow rendering method for a virtual object in a panoramic video provided by an embodiment of the present invention;
FIG. 7 is an exploded view of the support plate of the mobile phone used to obtain target pictures in the method provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of the support plate of the mobile phone in the supported state;
FIG. 9 is an enlarged view of portion A of FIG. 8;
FIG. 10 is an enlarged view of portion B of FIG. 8;
FIG. 11 is a schematic diagram of the support plate of the mobile phone in the folded state.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
Some implementations of the present invention are described in detail below with reference to the drawings. Where there is no conflict, the following embodiments and the features in the embodiments may be combined with each other. FIG. 1 is a flowchart of the light and shadow rendering method for a virtual object in a panoramic video provided by an embodiment of the present invention. As shown in FIG. 1, the method includes:
S101: determine whether the current picture of the panoramic video on the display screen includes preset virtual object feature points.
Before this step, a virtual object may be set in advance on a target frame picture of the panoramic video, recording the first frame identifier of the target frame picture, the first coordinates of the feature points representing key features of the virtual object, and the correspondence between the first coordinates and the first frame identifier.
In practical applications, virtual objects are generally set in certain scenes of the panoramic video and interact with the plot in the video. Each scene generally corresponds to several consecutive video frames of the panoramic video; these frames are the target frames, and a frame identifier may be a frame number or other feature information that uniquely identifies the frame. A virtual object is composed of a number of feature points representing the key points that identify it, i.e., the outline and basic features of the virtual object can be determined from the feature points. The first coordinates of the feature points of the virtual object are the coordinate positions in the panoramic video of the feature points composing the virtual object.
As an optional implementation of this embodiment of the present invention, as shown in FIG. 2, this step includes the following sub-steps:
S1011: obtain the second frame identifier corresponding to the current picture.
During playback of the panoramic video, the frame identifier corresponding to the current picture, i.e., the second frame identifier, is obtained in real time or at preset intervals.
S1012: match the second frame identifier with the first frame identifiers, determine the target first frame identifier matching the second frame identifier, and determine the target first coordinates corresponding to the target first frame identifier according to the correspondence.
A search is performed among the previously recorded first frame identifiers of the target frame pictures to determine whether there is a target first frame identifier identical to the second frame identifier. If there is, the target first coordinates corresponding to that target first frame identifier are determined according to the previously recorded correspondence. These target first coordinates are the positions on the target frame picture of the feature points composing the virtual object set on that target frame picture; the feature points represent the outline and basic features of the virtual object.
S1013: convert the target first coordinates into the target second coordinates corresponding to the display screen according to a preset model.
Taking one feature point as an example, suppose its target first coordinate is (x1, y1); the target first coordinate (x1, y1) of this feature point in the two-dimensional panoramic video can be converted into its target second coordinate (x2, y2) on the display screen. How to convert (x1, y1) into (x2, y2) according to the preset model is described below.
The coordinate of the target first coordinate (x1, y1) on the spherical video source corresponding to the two-dimensional panoramic video can be expressed as (x', y', z'), where (x', y', z') = (x1, y1) * M1, and M1 is the conversion matrix that converts coordinates of the original two-dimensional panoramic video into coordinates of the spherical video source; the matrix M1 is common knowledge in the art and is not described further here. According to the principle of perspective projection, the coordinate (x', y', z') of the feature point on the spherical video source can first be converted into a view-plane coordinate (x'', y'') by a conversion matrix M2, expressed as (x'', y'') = (x', y', z') * M2. In the present invention the conversion matrix M2 is related to the perspective projection matrix, the relative position between the feature point on the spherical video source and the position from which the user watches the video, and the relative rotation angle between the sphere model of the spherical video source and the user's viewing position; it is written as M2 = M21 * M22 * M23, where M21 is the relative position matrix between the feature point on the spherical video source and the user's viewing position, M22 is the rotation matrix of the sphere model of the spherical video source relative to the user's viewing position, and M23 is the projection matrix. After the conversion matrix M3, (x'', y'') can be converted into display screen coordinates, i.e., the target second coordinate (x2, y2): (x2, y2) = (x1, y1) * M1 * M2 * M3. M3 is a matrix related to the display screen resolution that converts view-plane coordinates into display screen coordinates.
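The matrix chain (x2, y2) = (x1, y1) * M1 * M2 * M3 described above can be sketched as follows. This is a simplified sketch only: the patent does not give concrete entries for M1, M2 = M21 * M22 * M23, or M3, so identity matrices stand in as placeholders, and every stage is treated as a 3x3 transform over homogeneous row vectors so the matrices compose.

```python
import numpy as np

def to_screen(p1, M1, M2, M3):
    """Map a feature point from panorama coordinates to screen coordinates
    via the chain (x2, y2) = (x1, y1) * M1 * M2 * M3.
    A homogeneous 1x3 row vector is used so the 3x3 matrices compose,
    with a final perspective divide by the homogeneous component."""
    p = np.array([p1[0], p1[1], 1.0])
    p = p @ M1 @ M2 @ M3
    return p[0] / p[2], p[1] / p[2]

# Placeholder matrices: identity stands in for the real sphere-mapping,
# view/projection, and screen-resolution matrices.
I = np.eye(3)
x2, y2 = to_screen((100.0, 200.0), I, I, I)
```

With identity placeholders the point is unchanged; real matrices would map the 2D panorama point onto the sphere, project it to the view plane, and scale it to screen pixels.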
The above process is repeated until the target first coordinates of all feature points composing the virtual object have been converted into target second coordinates.
S1014: determine whether the target second coordinates are within the coordinate range of the display screen.
Since the panoramic video is a 360-degree video, only part of its picture is displayed on the display screen. In this step it must be determined whether the target second coordinates of the virtual object's feature points obtained in step S1013 fall on the display screen. Assuming a screen resolution of 1920*1080, the coordinate range is 0 < x2 < 1920, 0 < y2 < 1080. If the target second coordinates of all feature points composing the virtual object are outside this range, the virtual object will not appear in the current picture on the display screen; if the target second coordinates of one or more of the feature points are within this range, step S1015 is executed, indicating that the virtual object, or part of it, will appear in the current picture on the display screen.
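The visibility test in S1014 reduces to a simple range check per feature point; a minimal sketch, with the screen size as assumed parameters defaulting to the 1920*1080 example:

```python
def on_screen(points, width=1920, height=1080):
    """Return the subset of target second coordinates that fall inside
    the display range 0 < x < width, 0 < y < height."""
    return [(x, y) for (x, y) in points if 0 < x < width and 0 < y < height]

# Only the first point lies inside a 1920x1080 screen.
visible = on_screen([(100, 50), (-5, 300), (2000, 500)])
```

If `visible` is empty the virtual object is skipped; otherwise step S1015 applies and the object (or part of it) will appear in the current picture.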
S1015: determine that the current picture of the panoramic video includes preset virtual object feature points.
S102: decompose the current picture into a first preset number of sub-pictures.
In this step, the current picture is decomposed into a first preset number of small square grid elements of a preset size, each small square grid element serving as a sub-picture, where each sub-picture is composed of a second preset number of pixels. For example, the current picture can be decomposed into C columns and R rows (where C and R are integers, C ≥ 2, R ≥ 2), yielding multiple sub-pictures, each composed of p pixels.
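The decomposition in S102 can be sketched with array slicing; the grid dimensions R and C are free parameters here, and edge pixels that do not fill a whole cell are simply dropped (an assumption, since the text does not say how remainders are handled):

```python
import numpy as np

def split_into_subpictures(frame, rows, cols):
    """Split an HxW grayscale frame into rows*cols sub-pictures of
    equal size; trailing pixels that do not fill a cell are dropped."""
    h, w = frame.shape[:2]
    ch, cw = h // rows, w // cols
    return [frame[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            for r in range(rows) for c in range(cols)]

# A 4x6 toy frame split into R=2 rows and C=3 columns:
frame = np.arange(24, dtype=float).reshape(4, 6)
subs = split_into_subpictures(frame, 2, 3)
```

Each of the resulting sub-pictures here holds p = 4 pixels and is processed independently in step S103.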
S103: determine the light intensity weighted center according to the image moments of each sub-picture, and determine the light source position information of the current picture based on the light intensity weighted centers and geometric centers of the sub-pictures.
First, the image moments of each sub-picture must be calculated to determine its light intensity weighted center. Image moments are a set of moments computed from a digital image; they usually describe global features of the image and provide a great deal of information about different types of geometric features of the image, such as size, position, orientation, and shape. For example, first-order moments are related to shape; second-order moments show the extent to which a curve spreads around its mean; and third-order moments measure symmetry about the mean. A set of seven invariant moments can be derived from the second- and third-order moments; invariant moments are statistical features of an image that are invariant under translation, scaling, and rotation, and in image processing geometric invariant moments can serve as an important feature for representing an object and for operations such as image classification. The first-, second-, and third-order moments and the seven invariant moments derived from the second- and third-order moments all have specific calculation formulas; determining the light intensity weighted center from these formulas is common technical knowledge in the art and is not described further here. The light intensity weighted center g determined through the image moments has coordinate position (xg, yg) in the sub-picture.
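The light intensity weighted center follows from the zeroth and first raw image moments (m00, m10, m01); a minimal grayscale sketch, where the fallback for an all-zero sub-picture is an added assumption not stated in the text:

```python
import numpy as np

def intensity_weighted_center(sub):
    """Raw image moments give the intensity-weighted center
    g = (m10/m00, m01/m00) of a grayscale sub-picture."""
    ys, xs = np.mgrid[0:sub.shape[0], 0:sub.shape[1]]
    m00 = sub.sum()
    if m00 == 0:  # uniformly black: fall back to the geometric center
        return ((sub.shape[1] - 1) / 2, (sub.shape[0] - 1) / 2)
    return (xs * sub).sum() / m00, (ys * sub).sum() / m00

# A 4x4 sub-picture with a single bright pixel at column 3, row 1:
sub = np.zeros((4, 4))
sub[1, 3] = 10.0
xg, yg = intensity_weighted_center(sub)
```

The weighted center is pulled toward the bright pixels, which is exactly the asymmetry that the next step compares against the geometric center.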
Second, for each sub-picture, the light angle of the sub-picture is determined from its light intensity weighted center and geometric center. Specifically, the coordinate position (xc, yc) of the geometric center c of the sub-picture is determined and compared with the coordinate position of the light intensity weighted center g. The direction of the vector cg from the geometric center c to the weighted center g is the result of the influence of the light, and provides a local lighting indication for the position of the sub-picture in the current picture. Specifically, the vector cg represents the direction of the light, d represents the magnitude of the vector cg, and α represents the light angle of the sub-picture; since tan α = (yg - yc)/(xg - xc), α = arctan((yg - yc)/(xg - xc)), i.e., the light angle of the sub-picture is calculated from the coordinates of the geometric center and the weighted center.
Third, the weight value corresponding to each sub-picture is determined, and the light-angle vectors of the sub-pictures are weighted and summed according to the weight values to obtain the light angle corresponding to the current picture. Specifically, the weight value of each sub-picture may be determined according to the magnitude of its light-angle vector, and/or its light angle, and/or prior experience.
The length of each sub-picture's light-angle vector is determined according to its weight value, and the vectors of determined length are then summed to obtain the light-angle vector corresponding to the current picture, from which the light angle corresponding to the current picture is obtained.
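The weighted vector sum can be sketched as follows: each sub-picture contributes a vector of magnitude d at angle alpha, scaled by its weight, and the angle of the resultant is the frame's light angle. The weighting scheme itself is left open in the text, so the weights below are arbitrary example values.

```python
import math

def frame_light_angle(angles, mags, weights):
    """Weight each sub-picture's light vector (magnitude d, angle alpha),
    sum the vectors componentwise, and return the resultant's angle."""
    x = sum(w * d * math.cos(a) for a, d, w in zip(angles, mags, weights))
    y = sum(w * d * math.sin(a) for a, d, w in zip(angles, mags, weights))
    return math.atan2(y, x)

# Two equally weighted unit vectors at 0 and 90 degrees:
a = frame_light_angle([0.0, math.pi / 2], [1.0, 1.0], [1.0, 1.0])
```

Two equal orthogonal contributions yield a resultant at 45 degrees, illustrating how the per-sub-picture estimates are fused into one frame-level angle.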
Finally, the light source position information of the current picture is determined from the above light angle and the pixel values of the current picture. Specifically, the pixel values of points along and near the light angle in the current picture are obtained, yielding the brightness distribution along the light angle. The closer a pixel value is to zero, the darker the pixel; the closer it is to the maximum value 255, the brighter the pixel. Therefore, by comparing the pixel values of these points, the point with the largest pixel value can be inferred to be the coordinate position of the light source, which may optionally be written as (xl, yl).
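Inferring the light source as the brightest pixel along the frame's light angle can be sketched as a ray walk over a grayscale frame. The ray origin and step size are assumptions here, since the text does not fix where the sampling starts.

```python
import numpy as np

def light_source_along_ray(frame, origin, angle, step=1.0):
    """Walk from origin along the light angle and return the coordinates
    (x, y) of the brightest pixel visited, taken as the light source."""
    h, w = frame.shape
    x, y = origin
    best, best_val = origin, -1.0
    while 0 <= int(x) < w and 0 <= int(y) < h:
        v = frame[int(y), int(x)]
        if v > best_val:
            best, best_val = (int(x), int(y)), v
        x += step * np.cos(angle)
        y += step * np.sin(angle)
    return best

# A 3x5 frame with a bright spot at column 4, row 1; walk along angle 0:
frame = np.zeros((3, 5))
frame[1, 4] = 255.0
src = light_source_along_ray(frame, (0, 1), 0.0)
```

Sampling a small neighborhood around the ray, as the text suggests, would make the estimate more robust than this single-line walk.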
S104: determine the outline information and position information of the virtual object based on the virtual object feature points.
Specifically, according to the target second coordinates within the coordinate range of the display screen, these feature points are connected to determine the outline information of the virtual object; the center coordinate of the virtual object on the display screen is then determined from the outline information and used as the position of the virtual object in the current picture on the display screen.
S105: generate the light and shadow effect of the virtual object according to the light source position information and the outline information and position information of the virtual object.
In this step, the display position of the shadow can be determined from the light source position and the position of the virtual object, and the display shape of the shadow is obtained from the outline information of the virtual object; combining the two generates the light and shadow effect of the virtual object.
Specifically, first, the environment around the virtual object in the current picture (e.g., surrounding buildings, nearby objects, etc.) affects the shape of its shadow; therefore, environment information around the virtual object, such as the relative positions of the surroundings and the virtual object and the shape information of the surrounding objects, can be obtained in advance, and a corresponding shadow environment adjustment factor is determined from this information for adjusting the shadow shape of the virtual object.
Second, a congruent figure of the virtual object is obtained based on the outline information, and the congruent figure is processed according to the shadow environment adjustment factor to obtain the shadow shape of the virtual object.
Finally, the light and shadow effect of the virtual object is generated from the center position of the shadow and the shadow shape. Specifically, the center point of the shadow shape is first determined and placed at the center coordinate of the virtual object on the display screen, thereby determining the position of the shadow in the current picture and generating the light and shadow effect of the virtual object.
The light and shadow rendering method for a virtual object in a panoramic video provided by the embodiments of the present invention can determine in real time whether the currently playing panoramic video includes a virtual object and generate the virtual object's light and shadow effect according to the light source position in the video, so that the light and shadow effect of the virtual object stays consistent with the scene in the video.
FIG. 3 is a structural diagram of the light and shadow rendering device for a virtual object in a panoramic video provided by an embodiment of the present invention. As shown in FIG. 3, the device specifically includes a judging module 100, a decomposition module 200, a first determining module 300, a second determining module 400, and a generating module 500, wherein:
the judging module 100 is configured to determine whether the current picture of the panoramic video on the display screen includes preset virtual object feature points; the decomposition module 200 is configured to decompose the current picture into a first preset number of sub-pictures if the virtual object feature points are included; the first determining module 300 is configured to determine the light intensity weighted center according to the image moments of each sub-picture and to determine the light source position information of the current picture based on the light intensity weighted centers and geometric centers of the sub-pictures; the second determining module 400 is configured to determine the outline information and position information of the virtual object based on the virtual object feature points; and the generating module 500 is configured to generate the light and shadow effect of the virtual object according to the light source position information and the outline information and position information of the virtual object.
This device is specifically configured to execute the method provided by the embodiment shown in FIG. 1; its implementation principles, methods, and functional uses are similar to those of the embodiment shown in FIG. 1 and are not repeated here.
FIG. 4 is a structural diagram of the light and shadow rendering device for a virtual object in a panoramic video provided by an embodiment of the present invention. As shown in FIG. 4, the device specifically includes a recording module 600, a judging module 100, a decomposition module 200, a first determining module 300, a second determining module 400, and a generating module 500, wherein:
the recording module 600 is configured to set a virtual object on a target frame picture of the panoramic video in advance and to record the first frame identifier of the target frame picture, the first coordinates of the feature points representing key features of the virtual object, and the correspondence between the first coordinates and the first frame identifier; the judging module 100, decomposition module 200, first determining module 300, second determining module 400, and generating module 500 are configured as described above for FIG. 3.
Optionally, the judging module 100 includes an obtaining unit 110, a matching unit 120, a conversion unit 130, a judging unit 140, and a determining unit 150, wherein:
the obtaining unit 110 is configured to obtain the second frame identifier corresponding to the current picture; the matching unit 120 is configured to match the second frame identifier with the first frame identifiers, determine the target first frame identifier matching the second frame identifier, and determine the target first coordinates corresponding to the target first frame identifier according to the correspondence; the conversion unit 130 is configured to convert the target first coordinates into the target second coordinates corresponding to the display screen according to a preset model; the judging unit 140 is configured to determine whether the target second coordinates are within the coordinate range of the display screen; and the determining unit 150 is configured to determine, if they are within the coordinate range of the display screen, that the current picture of the panoramic video includes preset virtual object feature points.
Optionally, the second determining module 400 is also configured to determine the outline information of the virtual object according to the target second coordinates within the coordinate range of the display screen, determine the center coordinate of the virtual object on the display screen according to the outline information, and use the center coordinate as the position information of the virtual object.
Optionally, the first determining module 300 is also configured to determine the light angle of each sub-picture according to the light intensity weighted center and the geometric center of the sub-picture, determine the weight value corresponding to each sub-picture, weight and sum the light-angle vectors of the sub-pictures according to the weight values to obtain the light angle corresponding to the current picture, and determine the light source position information of the current picture according to the light angle and the pixel values of the current picture.
This device is specifically configured to execute the methods provided by the embodiments shown in FIG. 1 and FIG. 2; its implementation principles, methods, and functional uses are similar to those of the embodiments shown in FIG. 1 and FIG. 2 and are not repeated here.
The light and shadow rendering devices for a virtual object in a panoramic video according to the above embodiments of the present invention may be provided as independent software or hardware functional units in the above-mentioned electronic apparatus, or integrated as functional modules in the processor, to execute the light and shadow rendering method for a virtual object in a panoramic video according to the embodiments of the present invention.
FIG. 5 is a schematic diagram of the hardware structure of an electronic apparatus that executes the light and shadow rendering method for a virtual object in a panoramic video provided by the method embodiments of the present invention. As shown in FIG. 5, the electronic apparatus includes:
one or more processors 510 and a memory 520; one processor 510 is taken as an example in FIG. 5. The apparatus executing the light and shadow rendering method for a virtual object in a panoramic video may further include an input device 530 and an output device 540.
The processor 510, the memory 520, the input device 530, and the output device 540 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 5.
As a non-volatile computer-readable storage medium, the memory 520 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the light and shadow rendering method for a virtual object in a panoramic video in the embodiments of the present invention. By running the non-volatile software programs, instructions, and modules stored in the memory 520, the processor 510 executes the various functional applications and data processing of the server, thereby implementing the light and shadow rendering method for a virtual object in a panoramic video.
The memory 520 may include a program storage area and a data storage area; the program storage area may store an operating system and the application programs required for at least one function, and the data storage area may store data created by use of the light and shadow rendering device for a virtual object in a panoramic video according to the embodiments of the present invention, etc. In addition, the memory 520 may include a high-speed random access memory and may further include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some embodiments, the memory 520 may optionally include memory remotely disposed relative to the processor 510; such remote memory may be connected over a network to the light and shadow rendering device for a virtual object in a panoramic video. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 530 can receive input numeric or character information and generate key signal inputs related to the user settings and function control of the light and shadow rendering device for a virtual object in a panoramic video; the input device 530 may include devices such as a pressing module.
The one or more modules are stored in the memory 520 and, when executed by the one or more processors 510, perform the light and shadow rendering method for a virtual object in a panoramic video.
The electronic apparatus in the embodiments of the present invention exists in various forms, including but not limited to:
(1) Mobile communication equipment: this type of equipment is characterized by mobile communication functions, with voice and data communication as its main goal. Such terminals include smart phones (e.g., iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer equipment: this type of equipment belongs to the category of personal computers, has computing and processing functions, and generally also has mobile Internet access. Such terminals include PDA, MID, and UMPC devices, for example the iPad.
(3) Servers.
Specifically, the electronic apparatus in this embodiment may be a mobile phone with a support structure; referring to FIGS. 6-11, the phone can be used to execute the light and shadow rendering method for a virtual object in a panoramic video of the above embodiments and to play VR panoramic video. By providing a support plate in the back cover, the phone can be supported stably enough for watching videos, while avoiding the need for a protective shell with a stand or an externally glued stand; this facilitates heat dissipation, improves the appearance of the phone, and solves the fatigue problem caused by holding the phone for a long time.
As shown in FIG. 6, the phone includes a front panel with a display screen (not shown in the figure) and a phone back cover 1000. A recessed area 1100 is provided in the middle-lower part of the back cover 1000, and a support plate 2000 matching the shape and size of the recessed area 1100 is installed in it; the upper end of the support plate 2000 is hinged to the recessed area 1100 so that the support plate 2000 can be rotated to a preset angle relative to the back cover 1000. Specifically, the upper end of the support plate 2000 in this embodiment may have a rotating shaft, and the upper end of the recessed area has a shaft hole matching the rotating shaft; the rotation of the support plate 2000 is realized through the cooperation of the shaft and the shaft hole. Of course, other rotating mechanisms of simpler structure are also within the optional scope of this embodiment.
In addition, to realize a detachable connection between the lower end of the support plate 2000 and the phone, in this embodiment a connection piece 3000 is provided at the lower end of the support plate 2000; one end of the connection piece 3000 is flexibly connected to the support plate 2000, and the other end has a plugging portion 310 matching the shape and size of the phone charging interface 4000, the plugging portion 310 plugging into the charging interface 4000. This structure not only makes the lower end of the support plate 2000 detachable from the phone but also protects the charging interface 4000, improving the functionality of the support plate 2000. Specifically, the connection piece 3000 in this embodiment is made of rubber; rubber not only has good deformability but is also low-cost and easy to implement.
In this embodiment a rotatable support plate is provided in the recessed area of the back cover, which can prop the phone up on its side to meet the user's requirements without fitting a protective shell, avoiding the shell's effect on heat dissipation. The lower end of the support plate is fixed by connecting to the phone's charging interface, which also protects the charging interface and improves the functionality of the support plate. Moreover, unfolding the support plate in this embodiment is very convenient: the plugging portion only needs to be pulled out of the charging interface.
In addition, existing support plates cannot fix the phone in place, so the inventors further improved the above support plate structure.
As shown in FIGS. 7-11, the support plate 2000 in this embodiment specifically includes a first plate body 2100, a second plate body 2200, a first connection plate 2300, a second connection plate 2400, and a connecting rod 2500; the first plate body 2100, first connection plate 2300, second connection plate 2400, and second plate body 2200 are connected in sequence to form a plate. The first plate body 2100 has opposite first and second ends (i.e., the upper and lower ends in the figure), and the second plate body 2200 has opposite third and fourth ends (i.e., the upper and lower ends in the figure). The connecting rod 2500 includes a first rod body 2510 and a second rod body 2520; opposite ends of the first rod body 2510 are hinged to the first end and the third end respectively, and opposite ends of the second rod body 2520 are hinged to the second end and the fourth end respectively, the first rod body 2510 and the second rod body 2520 both being disposed on the side of the support plate away from the front panel.
It should be noted that the number of connecting rods 2500 in this embodiment may also be one: the two ends of the single rod 2500 are connected to the first end and the second end respectively, or to the third end and the fourth end respectively, or one end of the rod 2500 is connected to the middle of one side of the first plate body 2100 and the other end to the middle of one side of the second plate body 2200. In that case the first connection plate 2300 and the second connection plate 2400 are split structures, i.e., each consists of two plate-shaped members; above the rod 2500 is one plate-shaped member of each of the first connection plate 2300 and the second connection plate 2400, and below the rod 2500 is the other plate-shaped member of each.
Specifically, the first connection plate 2300 and the second connection plate 2400 in this embodiment are located between the first plate body 2100 and the second plate body 2200; one side of the first connection plate 2300 is hinged to one side of the first plate body 2100, one side of the second connection plate 2400 is hinged to one side of the second plate body 2200, and the other side of the first connection plate 2300 is hinged to the other side of the second connection plate 2400, the first connection plate 2300 and the second connection plate 2400 being disposed on the side of the support plate 2000 near the front panel.
As shown in FIGS. 9-11, the first plate body 2100 in this embodiment is provided with a first portion 2110 disposed along its thickness direction, located between the hinge of the first plate body 2100 with the connecting rod 2500 and the hinge of the first plate body 2100 with the first connection plate 2300; the second plate body 2200 is provided with a second portion 2210 disposed along its thickness direction, located between the hinge of the second plate body 2200 with the connecting rod 2500 and the hinge of the second plate body 2200 with the second connection plate 2400. The first portion 2110, the second portion 2210, the first connection plate 2300, the second connection plate 2400, and the connecting rod 2500 form a five-bar mechanism.
When support is needed, the entire support plate 2000 is rotated to a certain angle. Since the first rod body 2510 and the second rod body 2520 are disposed on the side of the support plate 2000 away from the front panel while the first connection plate 2300 and the second connection plate 2400 are disposed on the side near the front panel, a force applied to the supporting face of the support plate 2000 in the direction away from the front panel places the five-bar mechanism at its first dead-point position: the first plate body 2100, second plate body 2200, first connection plate 2300, and second connection plate 2400 cannot rotate, and the dead point can only be released by an external force toward the front panel, ensuring that the support plate 2000 always behaves as a single plate and supports the phone stably. When the dead point is released by a force toward the front panel, the first plate body 2100 and the second plate body 2200 can be folded; the first connection plate 2300 and the second connection plate 2400 then rotate to a second dead-point position and form a finger receiving portion into which a finger can be inserted, fixing the phone on the user's hand and preventing it from falling when the user is jostled in a crowd. The support plate 2000 of this structure thus not only supports the phone but also fixes it, greatly improving its functionality.
It should be noted that the hinged connection between two components described above may be realized by providing a shaft hole in one component and a matching rotating shaft on the other; together they form a rotating mechanism so that the two components can rotate relative to each other.
Advantageously, the first connection plate 2300 and the second connection plate 2400 may be made of a material with a certain deformability, such as soft plastic, so that inserting a finger into the finger receiving portion formed by them is more comfortable.
As further shown in FIG. 8, the first end, second end, third end, and fourth end in this embodiment are each provided with a recessed portion 2600; the opposite ends of the first rod body 2510 are respectively received in the recessed portions 2600 of the first end and the third end, and the opposite ends of the second rod body 2520 are respectively received in the recessed portions 2600 of the second end and the fourth end.
This structure sets the connecting rod 2500 mechanism into the plate bodies through the recessed portions, so that the rod mechanism, the plate bodies, and the connection plates together form a single plate structure, improving the integrity of the entire support plate 2000 and facilitating its installation.
More advantageously, this embodiment also improves the structure of the recessed area 1100: the recessed area 1100 specifically includes a bottom wall and a side wall, and a ventilation structure 1110 is provided on the bottom wall; the ventilation structure 1110 may be a plurality of ventilation holes, or a ventilation grille provided on the bottom wall. The ventilation holes or grille enhance the heat dissipation of the phone whether the support plate 2000 is in the supported or folded state, extending the service life of the phone.
The device embodiments described above are only illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
An embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by an electronic apparatus, cause the electronic apparatus to perform the light and shadow rendering method for a virtual object in a panoramic video in any of the above method embodiments.
An embodiment of the present invention provides a computer program product, the computer program product including a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions which, when executed by an electronic apparatus, cause the electronic apparatus to execute the light and shadow rendering method for a virtual object in a panoramic video in any of the above method embodiments.
From the description of the above implementations, those skilled in the art can clearly understand that each implementation can be realized by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the essence of the above technical solution, or the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium. A computer-readable medium includes any mechanism that stores or transmits information in a form readable by a machine (e.g., a computer): for example, machine-readable media include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals). The computer software product includes a number of instructions causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the various embodiments or certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the embodiments of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or substitute equivalents for some of the technical features, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. 一种全景视频中虚拟对象的光影渲染方法,其特征在于,包括:
    判断全景视频在显示屏幕的当前画面是否包括预设的虚拟对象特征点;
    若包括所述虚拟对象特征点,将所述当前画面分解成第一预设数量的子图片;
    根据所述子图片的图像矩确定光强加权中心,基于所述子图片的所述光强加权中心和几何中心确定所述当前画面的光源位置信息;
    基于所述虚拟对象特征点确定所述虚拟对象的轮廓信息和位置信息;
    根据所述光源位置信息和所述虚拟对象的轮廓信息、位置信息,生成所述虚拟对象的光影效果。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    预先在全景视频的目标帧画面上设置虚拟对象,记录所述目标帧画面的第一帧标识、所述代表所述虚拟对象关键特征的特征点的第一坐标以及所述第一坐标与所述第一帧标识的对应关系。
  3. 根据权利要求2所述的方法,其特征在于,所述判断全景视频当前画面是否包括预设的虚拟对象特征点,包括:
    获取所述当前画面对应的第二帧标识;
    将所述第二帧标识与所述第一帧标识进行匹配,确定与所述第二帧标识匹配的目标第一帧标识,并根据所述对应关系确定所述目标第一帧标识对应的目标第一坐标;
    按照预设模型将所述目标第一坐标转化为所述显示屏幕对应的目标第二坐标;
    判断所述目标第二坐标是否处于所述显示屏幕坐标范围内;
    若处于所述显示屏幕坐标范围内,确定全景视频当前画面包括预设的虚拟对象特征点。
  4. 根据权利要求3所述的方法,其特征在于,所述基于所述虚拟对象特征点确定所述虚拟对象的轮廓信息和位置信息,包括:
    根据处于所述显示屏幕坐标范围内的目标第二坐标,确定所述虚拟对象的轮廓信息;
    根据所述轮廓信息确定所述虚拟对象在所述显示屏幕的中心坐标,将所述中心坐标作为所述虚拟对象的位置信息。
  5. The method according to any one of claims 1 to 4, wherein determining the light source position information of the current picture based on the light-intensity weighted center and the geometric center of each sub-picture comprises:
    determining a light angle of each sub-picture according to its light-intensity weighted center and its geometric center;
    determining a weight value corresponding to each sub-picture;
    performing a weighted summation of the light-angle vectors of the sub-pictures according to the weight values, to obtain a light angle corresponding to the current picture;
    determining the light source position information of the current picture according to the light angle and the pixel values of the current picture.
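The method claims above (claims 1 and 5) describe a concrete light-estimation pipeline: split the frame into sub-pictures, locate each sub-picture's light-intensity weighted center from its raw image moments, take the vector from the geometric center to that weighted center as the sub-picture's light-angle vector, and combine the vectors by a weighted sum. A minimal Python/NumPy sketch of those steps, assuming a grayscale frame, a square grid of sub-pictures, and uniform weights (the claims leave the weighting scheme and the preset sub-picture count open):

```python
import numpy as np

def light_weighted_center(sub):
    """Light-intensity weighted center from raw image moments m00, m10, m01."""
    h, w = sub.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = sub.sum()
    if m00 == 0:  # no light at all: fall back to the geometric center
        return (w - 1) / 2.0, (h - 1) / 2.0
    return (sub * xs).sum() / m00, (sub * ys).sum() / m00  # (m10/m00, m01/m00)

def sub_light_vector(sub):
    """Vector from the sub-picture's geometric center to its weighted center."""
    h, w = sub.shape
    cx, cy = light_weighted_center(sub)
    return np.array([cx - (w - 1) / 2.0, cy - (h - 1) / 2.0])

def frame_light_angle(frame, n=2, weights=None):
    """Split the frame into an n x n grid of sub-pictures and return the
    weighted sum of their light-angle vectors as an angle in radians."""
    rows = np.array_split(frame, n, axis=0)
    subs = [s for r in rows for s in np.array_split(r, n, axis=1)]
    if weights is None:
        weights = [1.0 / len(subs)] * len(subs)  # assumed uniform weights
    v = sum(w * sub_light_vector(s) for w, s in zip(weights, subs))
    return float(np.arctan2(v[1], v[0]))
```

For a frame lit only along its left edge, every lit sub-picture's weighted center sits left of its geometric center, so the summed vector points in the negative-x direction and the resulting angle is π. The final step of claim 5, recovering a light source position from this angle and the frame's pixel values, is left open here because the claim does not fix that mapping.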
  6. A light-and-shadow rendering device for a virtual object in a panoramic video, comprising:
    a judging module, configured to determine whether the current picture of the panoramic video on a display screen includes preset virtual object feature points;
    a decomposition module, configured to, if the virtual object feature points are included, decompose the current picture into a first preset number of sub-pictures;
    a first determination module, configured to determine a light-intensity weighted center of each sub-picture according to its image moments, and determine light source position information of the current picture based on the light-intensity weighted center and the geometric center of each sub-picture;
    a second determination module, configured to determine contour information and position information of the virtual object based on the virtual object feature points;
    a generation module, configured to generate a light-and-shadow effect of the virtual object according to the light source position information and the contour information and position information of the virtual object.
  7. The device according to claim 6, further comprising:
    a recording module, configured to set the virtual object on a target frame picture of the panoramic video in advance, and record a first frame identifier of the target frame picture, first coordinates of the feature points representing key features of the virtual object, and the correspondence between the first coordinates and the first frame identifier.
  8. The device according to claim 7, wherein the judging module comprises:
    an acquisition unit, configured to acquire a second frame identifier corresponding to the current picture;
    a matching unit, configured to match the second frame identifier against the first frame identifiers, determine a target first frame identifier that matches the second frame identifier, and determine, according to the correspondence, target first coordinates corresponding to the target first frame identifier;
    a conversion unit, configured to convert the target first coordinates into target second coordinates corresponding to the display screen according to a preset model;
    a judging unit, configured to determine whether the target second coordinates fall within the coordinate range of the display screen;
    a determination unit, configured to, if they fall within the coordinate range of the display screen, determine that the current picture of the panoramic video includes the preset virtual object feature points.
  9. The device according to claim 8, wherein the second determination module is further configured to determine the contour information of the virtual object according to the target second coordinates falling within the coordinate range of the display screen, determine, according to the contour information, the center coordinate of the virtual object on the display screen, and take the center coordinate as the position information of the virtual object.
  10. An electronic device, comprising: at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the light-and-shadow rendering method for a virtual object in a panoramic video according to any one of claims 1 to 5.
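Claims 3 and 4 above can be read as simple bookkeeping around the recorded feature points: match the current frame identifier against the recorded ones, map the recorded first coordinates to screen coordinates through the preset model, keep only the points that fall inside the display's coordinate range, then take their bounding contour and its center as the object's position. A hedged sketch of those two steps, where `to_screen` stands in for the unspecified preset coordinate model and the bounding box is one possible realization of the claimed contour:

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float]

def on_screen_points(frame_id: str,
                     records: Dict[str, List[Point]],
                     to_screen: Callable[[float, float], Point],
                     screen_w: int, screen_h: int) -> List[Point]:
    """Claim 3: convert recorded target first coordinates into target second
    (screen) coordinates and keep those inside the display's coordinate range."""
    pts = []
    for x, y in records.get(frame_id, []):
        sx, sy = to_screen(x, y)
        if 0 <= sx < screen_w and 0 <= sy < screen_h:
            pts.append((sx, sy))
    return pts

def contour_and_position(pts: List[Point]):
    """Claim 4: a bounding-box contour over the on-screen feature points,
    whose center is taken as the virtual object's position information."""
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    contour = (min(xs), min(ys), max(xs), max(ys))
    center = ((contour[0] + contour[2]) / 2.0, (contour[1] + contour[3]) / 2.0)
    return contour, center
```

The current picture "includes the preset virtual object feature points" exactly when `on_screen_points` returns a non-empty list.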
PCT/CN2018/099636 2018-08-09 2018-08-09 Light-and-shadow rendering method and apparatus for a virtual object in panoramic video, and electronic device WO2020029178A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/099636 WO2020029178A1 (zh) 2018-08-09 2018-08-09 Light-and-shadow rendering method and apparatus for a virtual object in panoramic video, and electronic device
CN201810975331.5A CN109064544A (zh) 2018-08-09 2018-08-24 Light-and-shadow rendering method and apparatus for a virtual object in panoramic video, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/099636 WO2020029178A1 (zh) 2018-08-09 2018-08-09 Light-and-shadow rendering method and apparatus for a virtual object in panoramic video, and electronic device

Publications (1)

Publication Number Publication Date
WO2020029178A1 true WO2020029178A1 (zh) 2020-02-13

Family

ID=64756096

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/099636 WO2020029178A1 (zh) 2018-08-09 2018-08-09 Light-and-shadow rendering method and apparatus for a virtual object in panoramic video, and electronic device

Country Status (2)

Country Link
CN (1) CN109064544A (zh)
WO (1) WO2020029178A1 (zh)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110166760A (zh) * 2019-05-27 2019-08-23 浙江开奇科技有限公司 Image processing method based on panoramic video images, and terminal device
CN113269863B (zh) * 2021-07-19 2021-09-28 成都索贝视频云计算有限公司 Real-time foreground object shadow generation method based on video images

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930513A (zh) * 2012-09-25 2013-02-13 Beihang University Virtual-real illumination fusion method for video scenes
US20160125642A1 (en) * 2014-10-31 2016-05-05 Google Inc. Efficient Computation of Shadows for Circular Light Sources
CN107749075A (zh) * 2017-10-26 2018-03-02 Pacific Future Technology (Shenzhen) Co., Ltd. Method and apparatus for generating light-and-shadow effects of virtual objects in video

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7206442B1 (en) * 2001-11-16 2007-04-17 Rudolph Technologies, Inc. Optical inspection method utilizing ultraviolet light
US8405658B2 (en) * 2009-09-14 2013-03-26 Autodesk, Inc. Estimation of light color and direction for augmented reality applications
CN101710429B (zh) * 2009-10-12 2012-09-05 Hunan University Illumination algorithm for an augmented reality system based on dynamic illumination maps
CN102322839B (zh) * 2011-06-15 2013-05-22 University of Shanghai for Science and Technology Device and method for measuring the light exit angle of an optical virtual light source
CN103528540A (zh) * 2013-10-11 2014-01-22 Hebei University of Science and Technology Prism-based single-camera stereo vision imaging device for weld pool sensing
CN104050716B (zh) * 2014-06-25 2017-06-16 Beihang University Visual modeling method for maritime multi-target SAR images
CN104316049B (zh) * 2014-10-28 2017-06-23 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences High-precision subdivision positioning method for ellipticized star-point light spots at low signal-to-noise ratio
CN204707145U (zh) * 2015-04-28 2015-10-14 江苏卡罗卡国际动漫城有限公司 Newspaper-reading mobile phone with a support plate
CN106504222B (zh) * 2016-11-21 2019-09-06 Changzhou Campus of Hohai University Underwater polarization image fusion system based on bionic vision mechanisms
CN107749076B (zh) * 2017-11-01 2021-04-20 Pacific Future Technology (Shenzhen) Co., Ltd. Method and apparatus for generating realistic illumination in an augmented reality scene
CN107747913B (зh) * 2017-11-15 2023-12-19 Xi'an Technological University Pipeline bending measurement device and method


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132938A (zh) * 2020-09-22 2020-12-25 Shanghai miHoYo Tianming Technology Co., Ltd. Model element deformation processing and picture rendering method, apparatus, device, and medium
CN112132938B (zh) * 2020-09-22 2024-03-12 Shanghai miHoYo Tianming Technology Co., Ltd. Model element deformation processing and picture rendering method, apparatus, device, and medium
CN114143561A (zh) * 2021-11-12 2022-03-04 北京中联合超高清协同技术中心有限公司 Multi-view roaming playback method for ultra-high-definition video
CN114143561B (zh) * 2021-11-12 2023-11-07 北京中联合超高清协同技术中心有限公司 Multi-view roaming playback method for ultra-high-definition video
CN116824029A (зh) * 2023-07-13 2023-09-29 北京弘视科技有限公司 Method and apparatus for holographic image shadow generation, electronic device, and storage medium
CN116824029B (зh) * 2023-07-13 2024-03-08 北京弘视科技有限公司 Method and apparatus for holographic image shadow generation, electronic device, and storage medium

Also Published As

Publication number Publication date
CN109064544A (zh) 2018-12-21

Similar Documents

Publication Publication Date Title
WO2020029178A1 (zh) Light-and-shadow rendering method and apparatus for a virtual object in panoramic video, and electronic device
US11270419B2 (en) Augmented reality scenario generation method, apparatus, system, and device
US10878537B2 (en) Image splicing method, apparatus, terminal, and storage medium
Li et al. Building and using a scalable display wall system
JP6496093B1 (ja) Exposure control for a three-dimensional 360-degree virtual reality camera
CN108648257B (zh) Panoramic picture acquisition method and apparatus, storage medium, and electronic apparatus
WO2017088491A1 (zh) Video playback method and apparatus
WO2019238114A1 (zh) Dynamic model three-dimensional reconstruction method, apparatus, device, and storage medium
CN105306862A (zh) Scenario video recording system and method based on 3D virtual synthesis technology, and scenario practical-training learning method
US11776209B2 (en) Image processing method and apparatus, electronic device, and storage medium
CN104427230B (zh) Augmented reality method and augmented reality system
WO2022262618A1 (zh) Screensaver interaction method and apparatus, electronic device, and storage medium
CN110278368A (zh) Image processing apparatus, photographing system, and image processing method
WO2020019132A1 (zh) Method, apparatus and electronic device for rendering a virtual object based on light information
WO2021170123A1 (zh) Video generation method and apparatus, and corresponding storage medium
WO2020019133A1 (zh) Shadow effect determination method, apparatus, and electronic device
CN107743637B (zh) Method and device for processing peripheral images
CN106530408A (zh) System for planning and designing temporary museum exhibitions
CN112766215A (zh) Face fusion method and apparatus, electronic device, and storage medium
WO2022042111A1 (zh) Video image display method and apparatus, multimedia device, and storage medium
WO2022179087A1 (zh) Video processing method and apparatus
WO2022083118A1 (zh) Data processing method and related device
CN116630518A (zh) Rendering method, electronic device, and medium
CN106651759A (zh) VR scene optimization method and apparatus based on a fixed-position camera
CN113014960B (zh) Method and apparatus for producing video online, and storage medium

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18929077

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 18929077

Country of ref document: EP

Kind code of ref document: A1