CN118135090A - Grid alignment method and device and electronic equipment - Google Patents

Grid alignment method and device and electronic equipment

Info

Publication number
CN118135090A
Authority
CN
China
Prior art keywords
grid
point
target object
point cloud
aligned
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211542593.5A
Other languages
Chinese (zh)
Inventor
范帝楷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202211542593.5A priority Critical patent/CN118135090A/en
Publication of CN118135090A publication Critical patent/CN118135090A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses a grid alignment method, a grid alignment device and electronic equipment. A grid alignment method comprising: determining a target point cloud corresponding to a target object; the target object is an alignment object of a grid to be aligned, and the grid to be aligned is a scene model corresponding to a scene where the target object is currently located; determining grid points corresponding to each point in the target point cloud in the grid to be aligned; and updating the radius value of the corresponding grid point based on each point in the target point cloud so as to align the grid to be aligned with the target object. The grid alignment method, the grid alignment device and the electronic equipment realize the alignment of the grids, so that the aligned grids are attached to the actual positions of the target objects, and MR interaction experience of users is improved.

Description

Grid alignment method and device and electronic equipment
Technical Field
The disclosure relates to the technical field of three-dimensional reconstruction, and in particular relates to a grid alignment method, a grid alignment device and electronic equipment.
Background
With the development of VR (Virtual Reality) technology, MR (Mediated Reality) applications based on VR headsets have also developed accordingly.
In MR applications, a see-through function is required, which enables the user to see the real world through the head-mounted device. Therefore, when see-through is started, three-dimensional reconstruction is performed on the scene currently seen by the user, so as to obtain the grid corresponding to the scene, namely the three-dimensional scene model.
Disclosure of Invention
This Summary is provided to introduce concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The embodiment of the disclosure provides a grid alignment method, a grid alignment device and electronic equipment, which can realize grid alignment, so that the aligned grids are attached to the actual positions of target objects, and MR interaction experience of users is improved.
In a first aspect, an embodiment of the present disclosure provides a grid alignment method, including: determining a target point cloud corresponding to a target object; the target object is an alignment object of a grid to be aligned, and the grid to be aligned is a scene model corresponding to a scene where the target object is currently located; determining grid points corresponding to each point in the target point cloud in the grid to be aligned; and updating the radius value of the corresponding grid point based on each point in the target point cloud so as to align the grid to be aligned with the target object.
In a second aspect, embodiments of the present disclosure provide a grid alignment apparatus, including: the determining unit is used for determining a target point cloud corresponding to the target object; the target object is an alignment object of a grid to be aligned, and the grid to be aligned is a scene model corresponding to a scene where the target object is currently located; the determining unit is further configured to: determining grid points corresponding to each point in the target point cloud in the grid to be aligned; and the alignment unit is used for updating the radius value of the corresponding grid point based on each point in the target point cloud so as to align the grid to be aligned with the target object.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the grid alignment method as described in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the steps of the grid alignment method according to the first aspect.
The grid alignment method, the grid alignment device and the electronic equipment provided by the embodiments of the disclosure determine the target point cloud corresponding to the target object, where the target point cloud can be used to represent the position of the target object, the target object is the alignment object of the grid to be aligned, and the grid to be aligned is the scene model corresponding to the scene where the target object is currently located. Therefore, the radius values of the corresponding grid points in the grid to be aligned are updated based on the target point cloud so that the grid to be aligned is aligned with the target object, which is equivalent to performing alignment optimization on the grid based on the position of the target object. The aligned grid is thus attached to the actual position of the target object, that is, the currently displayed scene model fits the actual position of the target object, which improves the MR interaction experience of the user.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of one embodiment of a grid alignment method according to the present disclosure;
FIG. 2 is a flow chart of another embodiment of a grid alignment method according to the present disclosure;
FIG. 3 is a schematic structural view of one embodiment of a grid alignment device according to the present disclosure;
FIG. 4 is an exemplary system architecture to which the grid alignment method of one embodiment of the present disclosure may be applied;
fig. 5 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The technical scheme of the embodiments of the disclosure can be applied to MR application scenarios or other application scenarios involving the see-through function.
In these application scenarios, when see-through is started, three-dimensional reconstruction is performed on the scene currently seen by the user and a three-dimensional scene model is generated; the generated three-dimensional scene model is referred to as a grid in the embodiments of the present disclosure.
In some embodiments, MR is implemented based on a head-mounted device. After the user wears the head-mounted device, views of the constructed three-dimensional scene model are projected into the coordinate systems of the two eyes, namely a left-eye image and a right-eye image, so that the user experiences the sensation of being placed in the real scene.
In the related art, due to limited computing power, see-through often cannot recover particularly accurate three-dimensional geometric information and can only recover an approximate three-dimensional layout.
For example, when a multiplayer game is played based on an MR application, accurate geometric positions are required; but due to the lower precision of see-through, the positions of teammates (or enemies) seen by human eyes cannot be fully aligned with the actual multi-machine positioning, so that aiming is difficult when an attack is launched and superimposed virtual special effects are prone to dislocation. Thus, the MR interaction experience of the user is poor.
Based on this, the embodiments of the disclosure provide a grid alignment scheme, which takes into account the low precision of the see-through function and performs alignment optimization on the three-dimensional grid reconstructed by the see-through function. The grid is aligned and optimized based on the actual position of the target object, so that the optimized grid fits the actual position of the target object, improving the MR interaction experience of the user.
For example, when the target object is an enemy, by this alignment scheme, the enemy seen by the user through the reconstructed scene (i.e., the aligned grid) fits the enemy's actual location, so that the enemy can be aimed at more accurately when an attack is launched. On the basis of the improved precision, the display effect of the superimposed virtual special effects is correspondingly improved. Thus, the MR interaction experience of the user is improved.
Referring to fig. 1, a flow of one embodiment of a grid alignment method according to the present disclosure is shown. The grid alignment method may be applied to a terminal device, which may be a VR headset. The grid alignment method as shown in fig. 1 includes the steps of:
Step 101, determining a target point cloud corresponding to a target object.
In some embodiments, the target object is an alignment object of a grid to be aligned, and the grid to be aligned is a scene model corresponding to a scene in which the target object is currently located.
The mesh to be aligned may be understood as an initial mesh constructed by the see-through function, which requires further alignment processing due to the limited accuracy of the see-through function.
The target object may be an interaction object of a current user (a user wearing the VR headset), where a scene model corresponding to a scene where the interaction object is located is a grid to be aligned, that is, a grid that the current user sees through the VR headset.
Taking a multiplayer game scenario as an example, the target object may be the enemy or friend of the current user (the user wearing the VR headset); the grid to be aligned may be a scene model corresponding to a scene where the enemy or friend is located, that is, a scene model that the VR headset presents to the current user, or a scene that the current user sees through the VR headset.
In order for the enemy or friend that the user sees in the scene to fit the actual location of the enemy or friend, it is necessary to align the reconstructed scene with that actual location; therefore, the enemy or friend may be referred to as the alignment object of the grid to be aligned.
In some embodiments, the target point cloud corresponding to the target object may be understood as a set of points corresponding to the target object, which can be used to characterize the target object and its position.
In some embodiments, the target point cloud may be determined by directly performing point cloud sampling on the target object; in other embodiments, an initial point cloud may first be obtained by performing point cloud sampling on the target object, and the target point cloud then obtained based on the initial point cloud.
Thus, as an alternative embodiment, step 101 comprises: acquiring an initial point cloud of a target object; determining a first projection point of each point in the initial point cloud on the left-eye image, and determining a second projection point of each point in the initial point cloud on the right-eye image; and carrying out eliminating processing on points in the initial point cloud according to the first projection points and the second projection points, and determining the point cloud after eliminating processing as a target point cloud.
In some embodiments, obtaining an initial point cloud of a target object includes: generating a simulation target object based on the real-time position of the target object, the preset target object height and the preset target object width; and performing point cloud sampling on the simulated target object to obtain an initial point cloud.
In some embodiments, the real-time location of the target object may be obtained from a VR headset worn by the target object.
Taking a multiplayer game scenario as an example, multiple users participating in a multiplayer game all need to wear VR headsets, and the multiple VR headsets can communicate with each other. Therefore, for any VR headset, the real-time position information synchronized by other VR headsets can be obtained, and the real-time position information of the VR headset can be synchronized to other VR headsets.
The preset target object height may be a height of the VR headset from the ground. The preset target object height can be automatically calibrated after the user wears the VR headset, so that the target object height can be used as preset information.
The preset target object width can be set according to the type of target object; for example, for a person, the preset target object width may be 75 cm.
Based on the real-time position, the preset target object height and the preset target object width, a simulation target object can be constructed; the simulation target object is equivalent to a planar sheet whose position is updated synchronously in real time.
Further, the point cloud sampling is performed on the simulation target object, and an initial point cloud can be obtained.
In some embodiments, a preset number of points are uniformly sampled on the simulation target object, and these points constitute the initial point cloud. For example, the preset number may be 50 × 50, i.e., 50 uniform samples in each of the height and width directions.
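As a rough illustration, this sampling step might look like the following Python sketch. The plane orientation (an x-z sheet with z up), the foot-point meaning of the position argument, and the helper name sample_billboard are assumptions made for illustration, not part of the disclosure.

```python
import numpy as np

def sample_billboard(position, height, width, n=50):
    """Uniformly sample an n x n point cloud on a vertical planar
    sheet that stands in for the target object. Assumed convention:
    `position` is the (x, y, z) foot point of the object, the sheet
    lies in the x-z plane, and z is the vertical (up) axis."""
    position = np.asarray(position, dtype=float)
    xs = np.linspace(-width / 2.0, width / 2.0, n)   # width direction
    zs = np.linspace(0.0, height, n)                 # height direction
    gx, gz = np.meshgrid(xs, zs)
    sheet = np.stack([gx.ravel(), np.zeros(n * n), gz.ravel()], axis=1)
    return sheet + position                          # shape (n*n, 3)

# e.g. a person-sized target roughly 3 m in front of the user:
initial_cloud = sample_billboard((0.0, 3.0, 0.0), height=1.7, width=0.75)
```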
In some embodiments, the initial point cloud acquired here may be directly taken as the target point cloud.
In other embodiments, a first projected point of each point in the initial point cloud on the left-eye image is determined, and a second projected point of each point in the initial point cloud on the right-eye image is determined.
As an optional implementation manner, for any point in the initial point cloud, the first projection point is determined based on the left-eye rotation parameter corresponding to the point, the left-eye translation parameter corresponding to the point and a preset projection model; the second projection point is determined based on the right-eye rotation parameter corresponding to the point, the right-eye translation parameter corresponding to the point and the preset projection model; the left-eye rotation parameter, the left-eye translation parameter, the right-eye rotation parameter and the right-eye translation parameter are determined based on the real-time pose of the target object.
In some embodiments, the VR headset may synchronize a real-time pose, represented by a rotation parameter and a translation parameter, in addition to synchronizing the real-time position of the target object. Thus, based on the real-time pose of the target object, a left eye rotation parameter, a left eye translation parameter, a right eye rotation parameter, and a right eye translation parameter may be determined.
And, the rotation parameter may be a quaternion and the translation parameter may be a vector.
The preset projection model may be understood as a projection model of the image acquisition device. In some embodiments, the projection models corresponding to the left-eye camera and the right-eye camera may be the same or different. The projection model can be understood as a mapping function for mapping points onto an image.
Thus, as an alternative embodiment, for any one point $P_k$ in the initial point cloud $\{O_k\}$, the projected point corresponding to that point may be expressed as $p_k^i = \pi_i\left(R_i P_k + t_i\right)$, where $i = 0$ represents the left-eye image, $i = 1$ represents the right-eye image, $R_i$ and $t_i$ represent the rotation and translation parameters respectively, $\pi_i$ represents the preset projection model, and $p_k^i$ represents the pixel coordinates (projection coordinates) of point $P_k$ projected onto image $i$.
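A minimal sketch of this stereo projection step follows, assuming a simple pinhole camera as the preset projection model; the intrinsics K, the baseline value, and the function name project are illustrative assumptions only.

```python
import numpy as np

def project(points, R, t, K):
    """Project world points P_k into one eye's image: p = pi(R P + t),
    with pi modeled here as a pinhole camera (an assumption; the text
    only requires some preset projection model)."""
    cam = points @ R.T + t          # world -> eye camera coordinates
    cam = cam / cam[:, 2:3]         # perspective divide
    return (cam @ K.T)[:, :2]       # pixel (projection) coordinates

# Hypothetical intrinsics and eye poses; i = 0 is the left eye,
# i = 1 the right eye, offset by a ~63 mm interpupillary baseline.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
points = np.array([[0.2, 0.1, 2.0],
                   [-0.3, 0.4, 2.5]])  # sample cloud points P_k
p_left = project(points, np.eye(3), np.zeros(3), K)
p_right = project(points, np.eye(3), np.array([-0.063, 0.0, 0.0]), K)
```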
Further, based on the first projection point corresponding to the left-eye image and the second projection point corresponding to the right-eye image, a point in the initial point cloud can be subjected to rejection processing, and the point cloud after the rejection processing is determined as the target point cloud.
As an optional implementation manner, for any point in the initial point cloud, determining an error value corresponding to the point according to the first projection point and the second projection point; and in response to detecting that the error value corresponding to the point is greater than the preset error value, eliminating the point from the initial point cloud.
In some embodiments, the preset error value may be an error value corresponding to an SAD (Sum of Absolute Differences) error, for example, the preset error value may be 10 pixels.
Thus, as an alternative embodiment, determining the error value corresponding to the point according to the first projection point and the second projection point comprises: determining the error value corresponding to the point based on the window size of a preset window, the window offset coordinates corresponding to the preset window, the first projection point and the second projection point.
The preset window may be, for example, a window of size 5 × 5; other window sizes are also possible. The window offset coordinates of the preset window are set according to its window size: for a 5 × 5 window there are 25 offsets in total, and each pixel of the window corresponds to a different offset coordinate. The set of offsets is the same for every projection point, but the resulting offset coordinates differ from pixel to pixel.
The preset window and the corresponding offset coordinates may refer to the related art of SAD error, which is not described in detail herein. And, it can be appreciated that if other error values are employed, a corresponding error calculation method is employed.
Based on the preset window, the SAD error can be calculated between the image blocks of the preset window size in the left-eye and right-eye images, each centered on the corresponding projection point.
Thus, as an alternative embodiment, the error value is expressed as $e_{\mathrm{sad}} = \frac{1}{|W|} \sum_{\delta \in W} \left| I_0\left(p_k^0 + \delta\right) - I_1\left(p_k^1 + \delta\right) \right|$, where $e_{\mathrm{sad}}$ represents the SAD error, $|W|$ represents the size of the preset window, $\delta$ represents the window offset coordinates, $I_0$ represents the left-eye image, and $I_1$ represents the right-eye image.
Through this embodiment, the target point cloud can be determined based on the real-time position and real-time pose of the target object, so that after the grid is aligned based on the target point cloud, the aligned grid fits the real-time position and real-time pose of the target object.
After determining the errors corresponding to all points in the initial point cloud, comparing the errors of the points with preset errors, and if the error of one point is larger than the preset error, rejecting the point; if the error of a point is less than the preset error, the point is preserved.
Furthermore, in the point cloud after the elimination processing, that is, the target point cloud, the error corresponding to each point is smaller than the preset error.
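The SAD-based rejection described above might be sketched as follows; the helper names and the omitted image-border handling are assumptions of this illustration, which reuses the projection coordinates from the earlier sketch.

```python
import numpy as np

def sad_error(img_left, img_right, p0, p1, half=2):
    """SAD error between the (2*half+1)^2 patches (5 x 5 for half=2)
    centered on the two projections of the same cloud point. Assumes
    both projections lie at least `half` pixels inside the images;
    border handling is omitted from this sketch."""
    u0, v0 = int(round(p0[0])), int(round(p0[1]))
    u1, v1 = int(round(p1[0])), int(round(p1[1]))
    w0 = img_left[v0 - half:v0 + half + 1, u0 - half:u0 + half + 1]
    w1 = img_right[v1 - half:v1 + half + 1, u1 - half:u1 + half + 1]
    return np.abs(w0.astype(float) - w1.astype(float)).mean()

def cull(points, proj_l, proj_r, img_l, img_r, thresh=10.0):
    """Keep only the points whose error stays within the preset error
    (e.g. 10 pixels); the rest are eliminated from the initial cloud."""
    keep = [k for k in range(len(points))
            if sad_error(img_l, img_r, proj_l[k], proj_r[k]) <= thresh]
    return points[keep]
```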
In some embodiments, the grid is only displayed after the perspective function is turned on; thus, as an alternative embodiment, step 101 is performed in response to detecting a perspective function opening instruction.
The perspective function opening instruction may have different triggering modes, which is not limited in the embodiment of the present disclosure. For example: the user can start the perspective function through the voice function; for another example: after the multiplayer game starts for a preset period of time, a perspective function is turned on, and the like.
Step 102, determining grid points corresponding to each point in the target point cloud in the grids to be aligned.
In some embodiments, the grid point corresponding to each point in the target point cloud may be understood as the point obtained after projecting that point onto the grid to be aligned.
And, in some embodiments, the grid to be aligned is a hemisphere based on a hemispherical coordinate system, so that each point in the target point cloud is projected onto the hemisphere, and a corresponding grid point can be determined.
As an optional implementation manner, for any point in the target point cloud, the grid point corresponding to the point comprises a first coordinate and a second coordinate, wherein the first coordinate is a pitch angle of the point projected onto the grid to be aligned, and the second coordinate is a yaw angle of the point projected onto the grid to be aligned; the radius value of the grid point corresponding to the point is a radius value determined based on the coordinates of the point.
In some embodiments, the pitch angle is expressed as $\phi = \arctan\left(\frac{z}{\sqrt{x^2 + y^2}}\right)$ and the yaw angle as $\theta = \arctan\left(\frac{y}{x}\right)$, where $x$, $y$ and $z$ are the abscissa, ordinate and vertical coordinate of the point. Therefore, the coordinates of the grid point corresponding to the point are expressed as $(\phi, \theta)$.
In some embodiments, the radius value corresponding to the point is expressed as $r = \sqrt{x^2 + y^2 + z^2}$.
Accordingly, in step 102, the pitch angle and the yaw angle are determined based on the coordinates of each point in the target point cloud, respectively, and thus the grid point corresponding to each point can be determined.
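A sketch of this spherical mapping, assuming the standard Cartesian-to-spherical conversion with the hemisphere centered at the origin and z as the vertical axis (a convention the text implies but does not state explicitly):

```python
import numpy as np

def to_grid_point(p):
    """Map a cloud point (x, y, z) to its grid point (phi, theta)
    and radius on the hemispherical grid."""
    x, y, z = p
    phi = np.arctan2(z, np.hypot(x, y))    # pitch (first coordinate)
    theta = np.arctan2(y, x)               # yaw (second coordinate)
    r = np.sqrt(x * x + y * y + z * z)     # radius value of the point
    return phi, theta, r
```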
And step 103, updating the radius value of the corresponding grid point based on each point in the target point cloud so as to align the grid to be aligned with the target object.
In some embodiments, since the mesh to be aligned is a hemispherical model, each mesh point corresponds to a radius value in the mesh to be aligned. The radius value of the grid point can be replaced by the radius value of the corresponding point in the target point cloud, so that the grid to be aligned is aligned with the target object.
Accordingly, the radius value corresponding to each point may be determined based on the coordinates of the point, and then replaced with the radius value of the corresponding grid point to achieve updating of the radius value of the corresponding grid point.
For example, assume that there are a point A and a grid point B, where the grid point corresponding to point A in the grid to be aligned is grid point B, the original radius value of grid point B is r1, and the radius value determined based on point A is r2; then the radius value of grid point B is updated from r1 to r2. The same replacement applies to the other grid points with their corresponding radius values.
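A hedged sketch of this replacement step follows, assuming the hemisphere is discretized into an (n_phi, n_theta) radius map (a resolution the text does not specify) and reusing to_grid_point from the sketch above.

```python
import numpy as np

def update_radii(grid_radius, points, n_phi, n_theta):
    """Replace the radius value of each covered grid point with the
    radius value determined from the corresponding cloud point
    (the r1 -> r2 replacement described above)."""
    for p in points:
        phi, theta, r = to_grid_point(p)
        # Bin (phi, theta) into the assumed grid: phi in [0, pi/2],
        # theta in [-pi, pi); indices are clamped to stay in range.
        i = min(max(int(phi / (np.pi / 2.0) * n_phi), 0), n_phi - 1)
        j = min(max(int((theta + np.pi) / (2.0 * np.pi) * n_theta), 0),
                n_theta - 1)
        grid_radius[i, j] = r
    return grid_radius
```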
In some embodiments, the grid to be aligned is a smoothed grid.
As an alternative embodiment, the mesh is smoothed by a gaussian filter.
In other embodiments, the grid to be aligned may undergo multiple alignment passes, and before each alignment pass, the current grid is smoothed.
Thus, as an alternative embodiment, the grid alignment method further comprises: before updating the radius value of the corresponding grid point each time, smoothing the grid to be aligned.
In some embodiments, the smoothing of the grid to be aligned may be performed before step 102 or before step 103.
Therefore, as one embodiment, smoothing is performed first, the corresponding grid points are then determined on the smoothed grid, and radius value substitution is performed based on the determined grid points. This process may be repeated multiple times, for example, 3 times.
As another embodiment, the corresponding grid points are determined first, smoothing is then performed, and radius value substitution is performed on the smoothed mesh based on the determined grid points. This process may also be repeated multiple times, for example, 3 times.
After repeating a number of times, the final aligned grid is output, and after the final aligned grid is projected onto the left and right eyes of the user, alignment with the target object can be achieved.
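Under the same assumptions as the sketches above, the repeated smooth-then-replace loop might look like the following; the Gaussian sigma and the iteration count of 3 are taken as illustrative defaults.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def align(grid_radius, points, n_phi, n_theta, iterations=3, sigma=1.0):
    """One possible reading of the loop: smooth the current grid with
    a Gaussian filter, then snap the covered grid points back onto the
    target point cloud, repeating a fixed number of times."""
    for _ in range(iterations):
        grid_radius = gaussian_filter(grid_radius, sigma=sigma)
        grid_radius = update_radii(grid_radius, points, n_phi, n_theta)
    return grid_radius  # final aligned grid, projected to both eyes
```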
Compared with the related art, in the embodiments of the disclosure, a target point cloud corresponding to the target object is determined, where the target point cloud can be used to represent the position of the target object, the target object is the alignment object of the grid to be aligned, and the grid to be aligned is the scene model corresponding to the scene where the target object is currently located. Therefore, the radius values of the corresponding grid points in the grid to be aligned are updated based on the target point cloud so that the grid to be aligned is aligned with the target object, which is equivalent to performing alignment optimization on the grid based on the position of the target object. The aligned grid is thus attached to the actual position of the target object, that is, the currently displayed scene model fits the actual position of the target object, improving the MR interaction experience of the user.
With further reference to fig. 2, a flow chart of one implementation of the method shown in fig. 1 in a related application scenario is shown. In fig. 2, the input information includes: the position of the object to be aligned, i.e., the position of the target object; the ground height, i.e., the preset height calibrated by the headset; the left-eye and right-eye images; and the grid calculated through perspective, namely the grid to be aligned initially generated by the perspective function.
Based on the position of the object to be aligned and the ground height, point cloud sampling is performed to obtain an initial point cloud; outlier rejection is then performed based on the initial point cloud and the left- and right-eye images to obtain the target point cloud. Next, spherical projection is performed based on the target point cloud and the corresponding grid points are determined. Grid optimization is then carried out on the grid, and the aligned grid is finally output.
With further reference to fig. 3, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of a grid alignment apparatus, which corresponds to the grid alignment method embodiment shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 3, the grid alignment apparatus of the present embodiment includes:
A determining unit 301, configured to determine a target point cloud corresponding to a target object; the target object is an alignment object of a grid to be aligned, and the grid to be aligned is a scene model corresponding to a scene where the target object is currently located; the determining unit 301 is further configured to: determining grid points corresponding to each point in the target point cloud in the grid to be aligned; an alignment unit 302, configured to update a radius value of the corresponding grid point based on each point in the target point cloud, so as to align the grid to be aligned with the target object.
In some embodiments, the determining unit 301 is further configured to: acquiring an initial point cloud of a target object; determining first projection points of all points in the initial point cloud on a left-eye image, and determining second projection points of all points in the initial point cloud on a right-eye image; and carrying out eliminating processing on points in the initial point cloud according to the first projection points and the second projection points, and determining the point cloud after the eliminating processing as the target point cloud.
In some embodiments, the determining unit 301 is further configured to: generating a simulation target object based on the real-time position of the target object, the preset target object height and the preset target object width; and performing point cloud sampling on the simulation target object to obtain the initial point cloud.
In some embodiments, for any one point in the initial point cloud, the first projection point is determined based on a left-eye rotation parameter corresponding to the point, a left-eye translation parameter corresponding to the point, and a preset projection model; the second projection point is determined based on the right eye rotation parameter corresponding to the point, the right eye translation parameter corresponding to the point and the preset projection model; wherein the left eye rotation parameter, the left eye translation parameter, the right eye rotation parameter, and the right eye translation parameter are determined based on the real-time pose of the target object.
In some embodiments, the determining unit 301 is further configured to: determining an error value corresponding to any point in the initial point cloud according to the first projection point and the second projection point; and in response to detecting that the error value corresponding to the point is greater than a preset error value, eliminating the point from the initial point cloud.
In some embodiments, the determining unit 301 is further configured to: and determining an error value corresponding to the point based on the window size of a preset window, window offset coordinates corresponding to the preset window, the first projection point and the second projection point.
In some embodiments, the grid to be aligned is a smoothed grid.
In some embodiments, the alignment unit 302 is further configured to: and before updating the radius value of the corresponding grid point each time, smoothing the grid to be aligned.
In some embodiments, for any one point in the target point cloud, the grid point corresponding to the point includes a first coordinate and a second coordinate, where the first coordinate is a pitch angle of the point projected onto the grid to be aligned, and the second coordinate is a yaw angle of the point projected onto the grid to be aligned; the radius value of the grid point corresponding to the point is a radius value determined based on the coordinates of the point.
Referring to fig. 4, fig. 4 illustrates an exemplary system architecture in which the grid alignment method of one embodiment of the present disclosure may be applied.
As shown in fig. 4, the system architecture may include terminal devices 401, 402, 403, a network 404, and a server 405. The network 404 may be used as a medium to provide communication links between the terminal devices 401, 402, 403 and the server 405. The network 404 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The terminal devices 401, 402, 403 may interact with the server 405 through the network 404 to receive or send messages and the like. Various client applications, such as web browser applications, search applications and news applications, may be installed on the terminal devices 401, 402, 403. A client application in the terminal devices 401, 402, 403 may receive instructions from the user and perform corresponding functions according to those instructions, for example, adding corresponding information to existing information according to the user's instruction.
The terminal devices 401, 402, 403 may be hardware or software. When the terminal devices 401, 402, 403 are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, electronic book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like. When the terminal devices 401, 402, 403 are software, they can be installed in the above-listed electronic devices, and may be implemented as multiple software or software modules (e.g., software or software modules for providing distributed services) or as a single software or software module. No specific limitation is imposed herein.
The server 405 may be a server that provides various services, for example, receives information acquisition requests sent by the terminal devices 401, 402, 403, and acquires presentation information corresponding to the information acquisition requests in various ways according to the information acquisition requests. And related data showing the information is transmitted to the terminal devices 401, 402, 403.
It should be noted that, the grid alignment method provided by the embodiments of the present disclosure may be performed by the terminal device, and accordingly, the grid alignment apparatus may be provided in the terminal devices 401, 402, 403. In addition, the grid alignment method provided by the embodiment of the present disclosure may also be performed by the server 405, and accordingly, the grid alignment device may be disposed in the server 405.
It should be understood that the number of terminal devices, networks and servers in fig. 4 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 5, a schematic diagram of a configuration of an electronic device (e.g., a terminal device or server in fig. 4) suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 5, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage means 508 into a random access memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining a target point cloud corresponding to a target object; the target object is an alignment object of a grid to be aligned, and the grid to be aligned is a scene model corresponding to a scene where the target object is currently located; determining grid points corresponding to each point in the target point cloud in the grid to be aligned; and updating the radius value of the corresponding grid point based on each point in the target point cloud so as to align the grid to be aligned with the target object.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not constitute a limitation of the unit itself in some cases, and the determination unit 301 may also be described as "a unit that determines a target point cloud corresponding to a target object", for example.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to herein is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (14)

1. A grid alignment method, comprising:
determining a target point cloud corresponding to a target object; the target object is an alignment object of a grid to be aligned, and the grid to be aligned is a scene model corresponding to a scene where the target object is currently located;
determining grid points corresponding to each point in the target point cloud in the grid to be aligned;
and updating the radius value of the corresponding grid point based on each point in the target point cloud so as to align the grid to be aligned with the target object.
2. The grid alignment method of claim 1, wherein the target object wears a head-mounted device, the grid alignment method further comprising:
determining the real-time position of the target object according to the real-time position information of the head-mounted device; the real-time position is used to determine the target point cloud.
3. The grid alignment method of claim 1, wherein the target object wears a head-mounted device, the grid alignment method further comprising:
determining the height of the target object according to preset information calibrated by the head-mounted equipment; the height of the target object is used to determine the target point cloud.
4. The grid alignment method according to claim 1, wherein the determining the target point cloud of the target object comprises:
acquiring an initial point cloud of a target object;
Determining first projection points of all points in the initial point cloud on a left-eye image, and determining second projection points of all points in the initial point cloud on a right-eye image;
and carrying out eliminating processing on points in the initial point cloud according to the first projection points and the second projection points, and determining the point cloud after the eliminating processing as the target point cloud.
5. The grid alignment method of claim 4, wherein the obtaining an initial point cloud of a target object comprises:
generating a simulation target object based on the real-time position of the target object, the preset target object height and the preset target object width;
And performing point cloud sampling on the simulation target object to obtain the initial point cloud.
6. The grid alignment method according to claim 4, wherein, for any one point in the initial point cloud, the first projection point is determined based on a left-eye rotation parameter corresponding to the point, a left-eye translation parameter corresponding to the point, and a preset projection model; the second projection point is determined based on the right eye rotation parameter corresponding to the point, the right eye translation parameter corresponding to the point and the preset projection model; wherein the left eye rotation parameter, the left eye translation parameter, the right eye rotation parameter, and the right eye translation parameter are determined based on the real-time pose of the target object.
7. The grid alignment method according to claim 4, wherein the performing a culling process on points in the initial point cloud according to the first projection point and the second projection point includes:
determining an error value corresponding to any point in the initial point cloud according to the first projection point and the second projection point;
and in response to detecting that the error value corresponding to the point is greater than a preset error value, eliminating the point from the initial point cloud.
8. The grid alignment method according to claim 7, wherein determining the error value corresponding to the point according to the first projection point and the second projection point comprises:
and determining an error value corresponding to the point based on the window size of a preset window, window offset coordinates corresponding to the preset window, the first projection point and the second projection point.
9. The grid alignment method according to claim 1, wherein the grid to be aligned is a smoothed grid.
10. The grid alignment method of claim 9, further comprising:
And before updating the radius value of the corresponding grid point each time, smoothing the grid to be aligned.
11. The grid alignment method according to claim 1, wherein, for any one point in the target point cloud, a grid point corresponding to the point includes a first coordinate and a second coordinate, the first coordinate is a pitch angle of the point projected onto the grid to be aligned, and the second coordinate is a yaw angle of the point projected onto the grid to be aligned;
The radius value of the grid point corresponding to the point is a radius value determined based on the coordinates of the point.
12. A grid alignment apparatus, comprising:
The determining unit is used for determining a target point cloud corresponding to the target object; the target object is an alignment object of a grid to be aligned, and the grid to be aligned is a scene model corresponding to a scene where the target object is currently located;
The determining unit is further configured to: determining grid points corresponding to each point in the target point cloud in the grid to be aligned;
And the alignment unit is used for updating the radius value of the corresponding grid point based on each point in the target point cloud so as to align the grid to be aligned with the target object.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-11.
14. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any one of claims 1-11.
CN202211542593.5A 2022-12-02 2022-12-02 Grid alignment method and device and electronic equipment Pending CN118135090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211542593.5A CN118135090A (en) 2022-12-02 2022-12-02 Grid alignment method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211542593.5A CN118135090A (en) 2022-12-02 2022-12-02 Grid alignment method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN118135090A true CN118135090A (en) 2024-06-04

Family

ID=91241124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211542593.5A Pending CN118135090A (en) 2022-12-02 2022-12-02 Grid alignment method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN118135090A (en)

Similar Documents

Publication Publication Date Title
CN118301261A (en) Special effect display method, device, equipment and medium
CN111050271B (en) Method and apparatus for processing audio signal
CN115690382B (en) Training method of deep learning model, and method and device for generating panorama
WO2022007627A1 (en) Method and apparatus for implementing image special effect, and electronic device and storage medium
CN109754464B (en) Method and apparatus for generating information
CN112766215B (en) Face image processing method and device, electronic equipment and storage medium
CN113344776B (en) Image processing method, model training method, device, electronic equipment and medium
CN110717467A (en) Head pose estimation method, device, equipment and storage medium
CN114049403A (en) Multi-angle three-dimensional face reconstruction method and device and storage medium
CN111833459B (en) Image processing method and device, electronic equipment and storage medium
CN111818265B (en) Interaction method and device based on augmented reality model, electronic equipment and medium
CN109816791B (en) Method and apparatus for generating information
WO2023240999A1 (en) Virtual reality scene determination method and apparatus, and system
CN116563740A (en) Control method and device based on augmented reality, electronic equipment and storage medium
WO2023140787A2 (en) Video processing method and apparatus, and electronic device, storage medium and program product
CN117319790A (en) Shooting method, device, equipment and medium based on virtual reality space
CN118135090A (en) Grid alignment method and device and electronic equipment
CN114529452A (en) Method and device for displaying image and electronic equipment
CN112070903A (en) Virtual object display method and device, electronic equipment and computer storage medium
CN112991147B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN112668474B (en) Plane generation method and device, storage medium and electronic equipment
CN117745981A (en) Image generation method, device, electronic equipment and storage medium
CN118057466A (en) Control method and device based on augmented reality, electronic equipment and storage medium
CN117784923A (en) VR-based display method, device, equipment and medium
CN118115636A (en) Avatar driving method, apparatus, electronic device, storage medium, and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination