CN116542659A - Resource allocation method, device, electronic equipment and storage medium - Google Patents


Publication number
CN116542659A
Authority
CN
China
Legal status
Pending
Application number
CN202310378033.9A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd filed Critical Beijing Chengshi Wanglin Information Technology Co Ltd

Classifications

    • G06Q20/06: Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme
    • G06Q20/123: Shopping for digital content
    • G06T3/06
    • G06T7/11: Region-based segmentation
    • G06T2207/10028: Range image; depth image; 3D point clouds
    • G06T2207/20021: Dividing image into blocks, subimages or windows
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a resource allocation method, a device, electronic equipment and a storage medium, wherein the method comprises the following steps: receiving a resource request, sent by a client, for generating a planar point cloud display diagram of a target space; acquiring the current remaining resource quota of the target account according to the account information of the target account carried by the resource request; and, in response to the current remaining resource quota meeting the trigger condition of the target resource quota required for generating the planar point cloud display diagram of the target space, deducting the target resource quota from the current remaining resource quota to update the current remaining resource quota. With the method and the device, the planar point cloud display diagram required by the client can be generated based on resource deduction, a proprietary channel for acquiring planar point cloud display diagrams is provided for the client, and the efficiency with which the client acquires planar point cloud display diagrams is improved; by generating planar point cloud display diagrams, the point cloud effect is presented in a new image display form, improving the visual experience of the user.

Description

Resource allocation method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer data processing technologies, and in particular, to a method and apparatus for allocating resources, an electronic device, and a storage medium.
Background
To facilitate viewing by a user, a client generally needs to send a request to a server to generate a point cloud display diagram of a target space, so as to acquire the point cloud display diagram produced by the server for display. During indoor visual positioning, the point cloud display diagram can serve as an aid, making it convenient for the user to check the current positioning.
Currently, the point cloud display diagram used as such an aid is typically in 3D form. When a 3D point cloud display diagram is acquired, it is generally obtained either from sparse three-dimensional points generated during visual positioning or from a point cloud collected by a depth device; a channel for providing a planar point cloud display diagram is lacking.
It can be seen that the existing point cloud display diagram is usually in 3D form, the display mode is single, and no proprietary channel for providing a planar point cloud display diagram exists, so that the acquisition efficiency of planar point cloud display diagrams is low.
Disclosure of Invention
In view of the foregoing, embodiments of the present application provide a resource allocation method, apparatus, electronic device, and storage medium that overcome or at least partially solve the foregoing problems.
In a first aspect, an embodiment of the present application provides a resource allocation method, which is applied to a server, where the method includes:
Receiving a resource request which is sent by a client and used for generating a planar point cloud display diagram of a target space, wherein the resource request is sent by a target account under a target application program on the client, the resource request comprises account information of the target account, the planar point cloud display diagram is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in a target panorama of the target space, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panorama segmentation image, and the depth image and the panorama segmentation image are generated based on the target panorama;
acquiring the current remaining resource quota of the target account according to the account information of the target account;
and generating the planar point cloud display diagram of the target space in response to the current remaining resource quota meeting a trigger condition of the target resource quota required for generating the planar point cloud display diagram of the target space, and deducting the target resource quota from the current remaining resource quota to update the current remaining resource quota.
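The three server-side steps above can be sketched in a few lines. This is a minimal illustration only: the in-memory `quotas` store, the function names, and the stub generator are assumptions for demonstration, not part of the disclosed implementation.

```python
# Hypothetical sketch of the three-step server flow: receive request,
# look up the account's remaining quota, check the trigger condition,
# then generate the diagram and deduct the target quota.

quotas = {"account-001": 30}  # account information -> current remaining resource quota


def generate_planar_point_cloud(target_space: str) -> bytes:
    # Stand-in for the real pipeline (target panorama -> depth image +
    # panorama segmentation image -> 3D coordinates -> plane projection).
    return b"planar-point-cloud-image"


def handle_resource_request(account: str, target_space: str, target_quota: int):
    # Step 2: acquire the current remaining resource quota of the target account.
    remaining = quotas.get(account)
    if remaining is None:
        return None, "unknown account"
    # Step 3: trigger condition — remaining quota covers the target quota.
    if remaining < target_quota:
        return None, "insufficient quota"
    image = generate_planar_point_cloud(target_space)
    quotas[account] = remaining - target_quota  # deduct to update the quota
    return image, "ok"
```

A request whose target quota exceeds the remaining quota is rejected without generation or deduction.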
In a second aspect, an embodiment of the present application provides a resource allocation device, applied to a server, where the device includes:
The device comprises a receiving module, an acquisition module and a processing module, wherein the receiving module is used for receiving a resource request which is sent by a client and used for generating a planar point cloud display diagram of a target space, the resource request is sent by a target account under a target application program on the client, the resource request comprises account information of the target account, the planar point cloud display diagram is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to the partial pixel points in a target panorama of the target space, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panorama segmentation image, and the depth image and the panorama segmentation image are generated based on the target panorama;
the acquisition module is used for acquiring the current remaining resource quota of the target account according to the account information of the target account;
and the processing module is used for generating the planar point cloud display diagram of the target space in response to the current remaining resource quota meeting the trigger condition of the target resource quota required for generating the planar point cloud display diagram, and deducting the target resource quota from the current remaining resource quota to update the current remaining resource quota.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program is executed by the processor to implement the steps of the resource allocation method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the resource allocation method as described in the first aspect above.
According to the technical scheme, when a resource request for generating the planar point cloud display diagram of the target space, sent by a client through a target account under a target application program, is received, the current remaining resource quota of the target account is acquired based on the account information of the target account carried in the resource request; when the current remaining resource quota meets the target resource quota required for generating the planar point cloud display diagram of the target space, the planar point cloud display diagram is generated and the target resource quota is deducted from the current remaining resource quota to update it. Thus the planar point cloud display diagram required by the client can be generated based on resource deduction, the client and the server exchange the planar point cloud display diagram of the target space through a resource exchange, a proprietary channel for acquiring planar point cloud display diagrams is provided for the client, and the efficiency with which the client acquires planar point cloud display diagrams is improved.
By generating planar point cloud display diagrams, the point cloud effect is presented in a new image display form, and the visual experience of the user is improved; by calculating the three-dimensional coordinates of the pixel points based on the panorama segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of pixel points at the scene edges can be improved, so that a planar point cloud display diagram of better quality can be obtained and its display effect optimized.
Drawings
Fig. 1 is a schematic diagram of a resource allocation method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a method for generating a planar point cloud display diagram provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a method for determining a coordinate set based on a depth image according to an embodiment of the present application;
fig. 4 shows a specific example of calculating three-dimensional coordinates of a wall pixel according to an embodiment of the present application;
FIG. 5 shows a specific illustration of a planar point cloud display diagram provided by an embodiment of the present application;
fig. 6 shows a schematic diagram of a resource allocation apparatus according to an embodiment of the present application;
fig. 7 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. "A plurality of" in this application means two or more.
In various embodiments of the present application, it should be understood that the sequence numbers of the following processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
An embodiment of the present application provides a resource allocation method, applied to a server, as shown in fig. 1, where the method includes:
step 1, receiving a resource request which is sent by a client and used for generating a planar point cloud display diagram of a target space, wherein the resource request is sent by a target account under a target application program on the client, and the resource request comprises account information of the target account, the planar point cloud display diagram is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in a target panorama of the target space, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panorama segmentation image, and the depth image and the panorama segmentation image are generated based on the target panorama.
The resource allocation method is applied to a server, the server and a client establish connection based on a target application program, and the server receives a resource request which is sent by the client and is used for generating a plane point cloud display diagram of a target space based on the connection between the server and the client.
The target space in this embodiment is a real physical space. The physical space may be a building such as a house, a mall, an office building or a gym; for example, the target space may be an office building, a commercial building or a residential building. The physical space may also be a building with a single spatial structure, e.g. the target space may be a room in a house. That is, the resource request sent by the client to generate the planar point cloud display diagram of the target space may be sent for a property listing or for a particular room.
The resource request sent by the client is used to request generation of a planar point cloud display diagram of the target space, i.e. the resource here can be understood as a display resource. On the client side, a resource request is generated after the user triggers the point cloud display control in a target application page, where the target application page is a page of the target application program that displays information related to the target space, and the target application program may be an application program supporting the display of property listing information. The server in this embodiment may be the application server corresponding to the target application program. That is, after the client starts the target application, the content displayed through the target application page may include an introduction to the target space and a point cloud display control, and based on the user's trigger in the target application page, the client sends a resource request to the server through the target application.
When the client sends a resource request through the target application program, the resource request is sent through the target account under the target application program, the sent resource request carries account information of the target account, and the server can identify different clients based on the account information of the target account.
The planar point cloud display diagram of the target space is an image generated by projecting partial pixel points carrying color characteristic values based on the three-dimensional coordinates corresponding to those pixel points in the target panorama; the three-dimensional coordinates are determined based on the depth image and the panorama segmentation image, both of which are generated from the target panorama. Compared with obtaining the three-dimensional coordinates from the depth image alone, this avoids, to a certain extent, the larger deviation that arises when scene edges are computed from the depth image only.
The color characteristic value carried by a pixel point is the pixel value corresponding to that pixel point in the panorama; the pixel value is determined by the values of the three channels R (red), G (green) and B (blue), each of which lies between 0 and 255. The process of projecting pixel points carrying color characteristic values can be understood as determining the plane coordinates corresponding to the pixel points based on their three-dimensional coordinates, so as to obtain the planar point cloud display diagram through plane projection.
Because the three-dimensional coordinates of the pixel points are calculated based on the panorama segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of pixel points at the scene edges can be improved; after plane projection, a planar point cloud display diagram of better quality can therefore be obtained, optimizing its display effect.
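The projection step described above can be illustrated with a minimal sketch. The patent does not specify the projection model, so this assumes a simple orthographic top-down projection onto the floor plane; the function name and rasterization details are illustrative assumptions.

```python
import numpy as np


def project_to_plane(points_xyz: np.ndarray, colors_rgb: np.ndarray,
                     resolution: int = 256) -> np.ndarray:
    """Orthographic top-down projection: drop the height axis, then
    rasterize the remaining (x, y) coordinates into an RGB image so that
    each projected pixel point keeps its color characteristic value.

    A simplified stand-in for the projection step; not the disclosed method.
    """
    xy = points_xyz[:, :2]                       # plane coordinates from 3D coordinates
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    scale = (resolution - 1) / np.maximum(hi - lo, 1e-9)
    cols_rows = ((xy - lo) * scale).astype(int)  # map to integer pixel grid
    image = np.zeros((resolution, resolution, 3), dtype=np.uint8)
    image[cols_rows[:, 1], cols_rows[:, 0]] = colors_rgb  # carry the RGB values
    return image
```

Each 3D point lands at a pixel whose color is the point's carried RGB value; points outside any colored cell leave the background black.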
And step 2, acquiring the current remaining resource quota of the target account according to the account information of the target account.
After receiving the resource request, sent by the client, for generating the planar point cloud display diagram of the target space, the server obtains the account information of the target account carried in the resource request and, based on that account information, obtains the current remaining resource quota corresponding to the target account at the server side, wherein the resource quota can be understood as a quota of recharged electronic money.
And step 3, generating a planar point cloud display diagram of the target space in response to the current remaining resource quota meeting a trigger condition of the target resource quota required for generating the planar point cloud display diagram of the target space, and deducting the target resource quota from the current remaining resource quota to update the current remaining resource quota.
After the current remaining resource quota of the target account is obtained, it is compared with the target resource quota required for generating the planar point cloud display diagram of the target space; when it meets that target resource quota, the planar point cloud display diagram of the target space is generated and the target resource quota is deducted from the current remaining resource quota to update it.
By responding to the resource request, obtaining the current remaining resource quota of the target account, generating the planar point cloud display diagram of the target space when that quota meets the required target resource quota, and updating the current remaining resource quota, the planar point cloud display diagram required by the client can be generated based on resource deduction; the client obtains the required image resource by paying the electronic money resource to the server, so that the client and the server exchange the planar point cloud display diagram of the target space through a resource exchange. By generating the planar point cloud display diagram based on the resource request, the server provides the client with a proprietary channel for obtaining planar point cloud display diagrams, improving the efficiency with which the client obtains them.
According to the implementation process above, when the resource request for generating the planar point cloud display diagram of the target space, sent by the client through the target account under the target application program, is received, the current remaining resource quota of the target account is acquired based on the account information carried in the resource request; when that quota meets the target resource quota required for generating the planar point cloud display diagram, the diagram is generated and the target resource quota is deducted from the current remaining resource quota to update it. The planar point cloud display diagram required by the client can thus be generated based on resource deduction, the client and the server exchange the planar point cloud display diagram of the target space through a resource exchange, a proprietary channel for acquiring planar point cloud display diagrams is provided for the client, and the efficiency with which the client acquires them is improved.
By generating planar point cloud display diagrams, the point cloud effect is presented in a new image display form, and the visual experience of the user is improved; by calculating the three-dimensional coordinates of the pixel points based on the panorama segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of pixel points at the scene edges can be improved, so that a planar point cloud display diagram of better quality can be obtained and its display effect optimized.
As an optional embodiment, after generating the planar point cloud display diagram of the target space, the server further performs: feeding back the planar point cloud display diagram of the target space to the target application program of the client when a publish notification for the planar point cloud display diagram, sent by the client, is received, so that the client displays the diagram through an application page of the target application program.
The client can send the publish notification to the server upon receiving the user's publish operation for the target application program; based on the notification, the server sends the generated planar point cloud display diagram of the target space to the client, completing delivery of the image resource so that the client obtains the required image resource.
The following describes the process of obtaining the current remaining resource quota of the target account and updating it. The server side includes a mapping list storing the mapping relation between account information and resource quota;
the obtaining the current remaining resource quota of the target account according to the account information of the target account includes:
determining the matched resource quota in the mapping list based on the account information of the target account, and taking the determined resource quota as the current remaining resource quota of the target account.
The server side includes a mapping list, and the mapping list stores the mapping relation between account information and resource quota. After acquiring a resource request carrying the account information of the target account, the server searches the mapping list for the matched resource quota based on that account information, and takes the found resource quota as the current remaining resource quota corresponding to the target account, so as to acquire the latest resource quota corresponding to the target account at the server side.
Correspondingly, the deducting the target resource quota from the current remaining resource quota to update the current remaining resource quota includes:
deducting the target resource quota from the current remaining resource quota of the target account stored in the mapping list, and updating the current remaining resource quota of the target account in the mapping list.
When the planar point cloud display diagram of the target space is generated and the target resource quota is deducted in response to the trigger condition, the current remaining resource quota of the target account can be found in the mapping list based on the mapping relation, and the target resource quota is deducted from the stored quota, so that the current remaining resource quota of the target account in the mapping list is updated. This keeps the remaining quota of the target account current and makes it convenient to obtain, from the mapping list, the latest resource quota corresponding to the target account at the server side.
According to this embodiment, after receiving the resource request, the server acquires the account information of the target account carried in the request and determines the matched resource quota in the mapping list based on that information, thereby acquiring the current remaining resource quota of the target account; when the planar point cloud display diagram of the target space is generated based on the trigger condition, the target resource quota is deducted from the current remaining resource quota stored in the mapping list, updating the remaining quota of the target account so that the latest quota at the server side is readily obtained.
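The mapping-list lookup and deduction described above can be sketched as follows; the `MappingList` class, its dict-based storage, and its method names are illustrative assumptions, not the disclosed implementation.

```python
# Minimal sketch of the mapping list: account information mapped to
# resource quota, with lookup and in-place deduction.

class MappingList:
    def __init__(self):
        self._quota_by_account = {}  # account information -> resource quota

    def set_quota(self, account: str, quota: int) -> None:
        self._quota_by_account[account] = quota

    def current_quota(self, account: str) -> int:
        # Determine the matched resource quota for the target account.
        return self._quota_by_account[account]

    def deduct(self, account: str, target_quota: int) -> int:
        # Deduct the target quota from the stored remaining quota and
        # write the updated value back, keeping the mapping list current.
        updated = self._quota_by_account[account] - target_quota
        self._quota_by_account[account] = updated
        return updated
```

Because the deduction writes back into the same mapping, a later lookup always returns the latest remaining quota.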
In an optional embodiment, the generating the planar point cloud display diagram of the target space in response to the current remaining resource quota meeting a trigger condition of the target resource quota required for generating the planar point cloud display diagram of the target space includes:
generating a planar point cloud display diagram of the target space based on a target panorama of the target space in response to the trigger condition that the current remaining resource quota is greater than or equal to the target resource quota.
After the current remaining resource quota of the target account is obtained, it is compared with the target resource quota required for generating the planar point cloud display diagram of the target space; when it is greater than or equal to the target resource quota, the trigger condition is determined to be met, and the planar point cloud display diagram of the target space can then be generated from the target panorama of the target space.
Because the current remaining resource quota of the target account is greater than or equal to the target resource quota, the deduction either exactly exhausts the remaining quota or leaves a remainder, so the planar point cloud display diagram of the target space can be generated based on resource deduction. When generating the planar point cloud display diagram, it can be generated based on the target panorama corresponding to the target space, so as to obtain the planar point cloud display diagram from the target panorama.
According to this embodiment, when the current remaining resource quota of the target account is compared with the target resource quota required for generating the planar point cloud display diagram of the target space and is determined to be greater than or equal to it, the planar point cloud display diagram is generated from the target panorama, enabling generation based on resource deduction.
In an alternative embodiment, the method further comprises: receiving a resource allocation request sent by the client through the target account, wherein the resource allocation request carries a first resource quota; and responding to the resource configuration request, and updating the current residual resource limit of the target account according to the first resource limit.
In this embodiment, the server may receive, based on the connection established between the target application and the client, a resource configuration request sent by the client through the target account under the target application. The resource configuration request carries the first resource limit and can be regarded as an electronic money resource recharging request.
The server side can update the current residual resource limit corresponding to the target account at the server side according to the received resource allocation request carrying the first resource limit, so as to update the resource limit in a client side recharging mode. And when the current residual resource limit corresponding to the target account is updated, adding the first resource limit on the basis of the existing residual resource limit so as to update the current residual resource limit of the target account.
For example, if the existing resource limit corresponding to the target account at the server side is 30 yuan, then after receiving a resource configuration request carrying a first resource limit (for example, 50 yuan) sent by the client through the target account, the server updates the current remaining resource limit based on the first resource limit and determines that the updated remaining resource limit is 80 yuan.
According to this embodiment, after the resource configuration request carrying the first resource limit sent by the client through the target account is received, the first resource limit is added on the basis of the existing resource limit, so that the resource limit is updated in a client recharging mode.
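The recharge update above reduces to a simple accumulation; a minimal sketch follows, using the figures from the 30 yuan / 50 yuan example (the function name is hypothetical).

```python
def apply_recharge(current_remaining: float, first_limit: float) -> float:
    """Add the first resource limit carried by the resource configuration
    request on top of the existing remaining limit."""
    return current_remaining + first_limit
```

With the example above, `apply_recharge(30, 50)` yields the updated remaining limit of 80 yuan.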
As an alternative embodiment, the method further comprises: determining overdraft resource limit corresponding to the target account based on the account grade of the target account; and updating the current residual resource limit of the target account according to the overdraft resource limit corresponding to the target account.
The server side can determine the account grade corresponding to the target account based on the account information of the target account, and the account grade can be determined based on parameters such as the account login frequency, the account online time length and the like. Under the condition that the server side allows overdraft of the account, the overdraft resource limit corresponding to the target account can be determined based on the corresponding relation between the account grade and the overdraft resource limit, and then the current residual resource limit of the target account is updated according to the overdraft resource limit corresponding to the target account.
The account level may be positively correlated with the overdraft resource limit, i.e., the higher the account level, the greater the corresponding overdraft resource limit. When the current residual resource limit of the target account is updated according to the overdraft resource limit corresponding to the target account, if the current residual resource limit of the target account is 0 before the update, the overdraft resource limit corresponding to the target account can be directly determined as the updated residual resource limit; if the current residual resource limit of the target account is not 0 (i.e., a positive number) before the update, the overdraft resource limit can be added on the basis of the current residual resource limit to determine the updated residual resource limit.
After the current residual resource limit of the target account is updated based on the overdraft resource limit corresponding to the target account, if a resource configuration request sent by the client through the target account is received, the resource limit corresponding to the resource configuration request can be accumulated on the basis of the current residual resource limit, so that the residual resource limit is updated again.
According to the embodiment, the overdraft resource limit corresponding to the target account can be determined based on the account grade of the target account, the overdraft resource limit is accumulated on the basis of the existing resource limit, and the latest residual resource limit is determined so as to update the residual resource limit.
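The overdraft update might be sketched as below. The level-to-overdraft mapping is invented purely for illustration; the text only states that the two are positively correlated.

```python
# Hypothetical level-to-overdraft mapping: the higher the account level,
# the greater the overdraft resource limit (values are illustrative).
OVERDRAFT_LIMIT_BY_LEVEL = {1: 0.0, 2: 20.0, 3: 50.0}

def apply_overdraft(current_remaining: float, account_level: int) -> float:
    """Add the overdraft limit for the account level on top of the current
    remaining limit; this covers both the zero and the positive case."""
    overdraft = OVERDRAFT_LIMIT_BY_LEVEL.get(account_level, 0.0)
    return current_remaining + overdraft
```

When the current remaining limit is 0, the result is simply the overdraft limit itself, as described above.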
The following describes a process of generating a planar point cloud display diagram of a target space, referring to fig. 2, when generating the planar point cloud display diagram of the target space, the process includes the following steps:
and step 31, acquiring the depth image and the panoramic segmentation image based on the target panoramic image of the target space.
The target panorama is a panorama determined by performing panorama shooting on a target space and performing vertical correction processing, and a panorama displayed in a Virtual Reality (VR) scene is a VR panorama. And processing the target panorama of the target space to obtain a depth image and a panorama segmentation image corresponding to the target panorama.
Depth images, also known as range images, refer to images having as pixel values the distance (depth) from an image collector to points in a scene, which directly reflect the geometry of the visible surface of the scene; the panoramic segmentation image is an image determined by dividing the panoramic image into areas on the basis of the panoramic image and setting category labels for the pixel points of the areas, and the category labels corresponding to the pixel points in the panoramic image can be obtained on the basis of the panoramic segmentation image.
And step 32, according to the depth image and the panoramic segmentation image, acquiring three-dimensional coordinates corresponding to each target pixel point in a target set, wherein the target set comprises at least part of the pixel points in the target panoramic image.
After the depth image and the panoramic segmented image corresponding to the target space are obtained based on the target panoramic image, three-dimensional coordinates corresponding to at least part of pixel points in the target panoramic image can be obtained according to the depth image and the panoramic segmented image, the depth image is combined on the basis of the panoramic segmented image, and the three-dimensional coordinates corresponding to each target pixel point in the target set corresponding to the target panoramic image are calculated.
The target set is a pixel point set and comprises at least part of pixel points in the target panoramic image, and based on the cooperation of the depth image and the panoramic segmentation image, three-dimensional coordinates corresponding to all the target pixel points in the target set are calculated, so that the three-dimensional coordinates corresponding to at least part of pixel points in the target panoramic image are obtained.
By combining the depth image on the basis of the panoramic segmented image, the three-dimensional coordinates of the pixel points are calculated, and compared with the situation that the three-dimensional coordinates of the pixel points are obtained only based on the depth image, the problem that the deviation is larger when the scene edge is calculated based on the depth image can be avoided to a certain extent.
And step 33, based on the three-dimensional coordinates of part of target pixel points in the target set, projecting the part of target pixel points carrying color characteristic values to obtain a planar point cloud display diagram corresponding to the target space, wherein the part of target pixel points are pixel points meeting projection requirements.
After the three-dimensional coordinates corresponding to each target pixel point in the target set are obtained, partial target pixel points meeting the projection requirement can be determined in the target set, and projection is carried out on partial target pixel points carrying color characteristic values based on the determined three-dimensional coordinates of the partial target pixel points so as to obtain a plane point cloud display diagram corresponding to the target space through plane projection.
Because the three-dimensional coordinates of the pixel points are calculated by combining the depth image with the panoramic segmented image, the calculation accuracy of the three-dimensional coordinates of the pixel points at the scene edge can be improved; further, after plane projection, a planar point cloud display diagram of better quality can be obtained, optimizing the display effect of the planar point cloud display diagram.
According to the implementation process, when the target panorama corresponding to the target space is obtained, the depth image and the panoramic segmented image are acquired based on the target panorama; by combining the depth image with the panoramic segmented image, the three-dimensional coordinates corresponding to each target pixel point in the target set corresponding to the target panorama are calculated, improving the calculation accuracy of the three-dimensional coordinates of scene-edge pixel points. After the three-dimensional coordinates of all target pixel points in the target set are determined, the partial target pixel points meeting the projection requirement are screened out and, carrying their color characteristic values, are projected based on their three-dimensional coordinates to obtain the planar point cloud display diagram corresponding to the target space, yielding a planar point cloud display diagram of better quality and optimizing its display effect. Moreover, because no professional equipment is needed, the dependence on equipment is reduced, and cost is saved while the point cloud diagram display effect is ensured.
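The three steps can be sketched end to end as below. The model calls are placeholders and the per-pixel coordinate computation is grossly simplified; only the control flow mirrors steps 31 to 33.

```python
# Skeleton of steps 31-33; the two predictors are stand-ins for the depth
# map model and the semantic segmentation model described later.
def predict_depth(pano):
    # Step 31: depth image; placeholder returning unit depth per pixel.
    return [[1.0 for _ in row] for row in pano]

def predict_segmentation(pano):
    # Step 31: panoramic segmentation; placeholder labelling everything 0.
    return [[0 for _ in row] for row in pano]

def generate_plane_point_cloud(pano):
    depth = predict_depth(pano)
    labels = predict_segmentation(pano)
    points = []
    for v, row in enumerate(pano):
        for u, rgb in enumerate(row):
            # Step 32: a three-dimensional coordinate per target pixel
            # (grossly simplified here; the real computation combines the
            # depth image with the segmentation, as described below).
            x, y, z = float(u), float(v), depth[v][u]
            # Step 33: project pixels meeting the (placeholder) projection
            # requirement onto the plane, keeping their color feature values.
            if labels[v][u] == 0:
                points.append((x, z, rgb))
    return points
```

Each output point keeps its color feature value, so the projected plane point cloud can be rendered directly.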
The following describes how to determine a depth image and a panoramic segmented image, where the obtaining the depth image and the panoramic segmented image based on a target panorama of the target space includes:
Predicting a depth value of a pixel point in the target panoramic image based on a depth image model, and acquiring the depth image;
and carrying out category prediction on the pixel points in the target panoramic image based on a semantic segmentation model, and obtaining the panoramic segmentation image carrying category labels corresponding to the pixel points in the target panoramic image.
When determining the depth image corresponding to the target space based on the target panoramic image, the target panoramic image can be processed by adopting a depth image model to obtain the depth value of each pixel point in the target panoramic image, and further obtain the depth image.
The depth map model is obtained by training on open-source images and real depth map data, and may adopt a common encoder-decoder network structure such as U-Net. By inputting the target panorama into the depth map model, the depth value of each pixel point in the target panorama can be predicted, and a depth image representing the depth value corresponding to each pixel point in the target panorama is thereby obtained.
When determining the panoramic segmentation image corresponding to the target space based on the target panoramic image, the semantic segmentation model can be adopted to process the target panoramic image so as to obtain category labels corresponding to each pixel point in the target panoramic image, and then the panoramic segmentation image is obtained. Because the panoramic segmentation image comprises class labels of all pixel points in the target panoramic image, the pixel points in the target panoramic image can be classified based on the class labels of the pixel points, and further the segmentation of the target panoramic image based on the class labels of the pixel points can be realized, the segmentation is not true image segmentation, the region division of the target panoramic image can be regarded as the region division based on the class labels of the pixel points, and the class labels corresponding to the pixel points in the same region can be the same.
The segmentation labels of the semantic segmentation model mainly comprise the ceiling, wall surfaces and floor, and may also comprise other object types, such as indoor furniture in the target space (tables, chairs, beds and the like). The semantic segmentation model is obtained by training on open-source images and real pixel class label data, and its model structure may also adopt a common encoder-decoder network. By inputting the target panorama into the semantic segmentation model, the class labels of all pixel points in the target panorama can be predicted, and the panoramic segmented image carrying the class labels corresponding to each pixel point in the target panorama is thereby obtained.
According to the embodiment, after the target panoramic image of the target space is obtained, the target panoramic image can be respectively input into the depth image model and the semantic segmentation model, the depth value prediction is carried out on the pixel points in the target panoramic image based on the depth image model so as to obtain the depth image, the category prediction is carried out on the pixel points in the target panoramic image based on the semantic segmentation model, the panoramic segmentation image carrying the category labels respectively corresponding to the pixel points in the target panoramic image is obtained, the depth image and the panoramic segmentation image are obtained based on the training mature model, and the processing efficiency is improved while the quality is ensured.
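As a concrete illustration of the region division by class label, the grouping below collects pixel coordinates per predicted label. The numeric label values (0 for ceiling, 1 for wall, 2 for floor) are assumptions for the sketch, not values specified by the text.

```python
# Assumed label values: 0 = ceiling, 1 = wall, 2 = floor.
def group_pixels_by_label(seg):
    """seg: 2D grid of class labels -> {label: [(row, col), ...]}.
    Pixels sharing a label form one region of the panoramic segmentation;
    no true image segmentation is performed, only label-based grouping."""
    regions = {}
    for v, row in enumerate(seg):
        for u, label in enumerate(row):
            regions.setdefault(label, []).append((v, u))
    return regions
```

This makes explicit why the "segmentation" is only a region division: pixels in the same region simply carry the same class label.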
The process of calculating three-dimensional coordinates of pixel points based on the depth image and the panoramic segmented image is described below. The obtaining three-dimensional coordinates corresponding to each target pixel point in the target set according to the depth image and the panoramic segmented image includes:
determining all pixel points corresponding to a first class label and all pixel points corresponding to a second class label in the target panoramic image according to the panoramic segmented image, wherein the panoramic segmented image comprises class labels respectively corresponding to all pixel points in the target panoramic image, the pixel points corresponding to the first class label are located in a first area, and the pixel points corresponding to the second class label are located in a second area;
determining a first coordinate set comprising three-dimensional coordinates corresponding to the pixel points of the first area and a second coordinate set comprising three-dimensional coordinates corresponding to the pixel points of the second area based on the depth image;
according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least part of pixel points located in a third area of the target panoramic image is obtained, and a fourth coordinate set corresponding to at least part of pixel points located in a fourth area of the target panoramic image is obtained;
The pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points;
the first area, the second area and the third area are respectively a top area, a ground area and a wall area, which correspond to the target space on the target panoramic image, and the fourth area is an area which is different from the first area, the second area and the third area in the target panoramic image.
After generating the depth image and the panoramic segmented image based on the target panoramic image, three-dimensional coordinates corresponding to each target pixel point in the target set corresponding to the target panoramic image can be obtained based on the cooperation of the depth image and the panoramic segmented image.
Since the panoramic segmented image includes class labels to which each pixel point in the target panorama corresponds, all the pixel points in the target panorama corresponding to the first class labels and all the pixel points corresponding to the second class labels can be determined based on the panoramic segmented image. The first class label corresponds to a first region of the target panorama and the second class label corresponds to a second region of the target panorama, i.e. the pixel points corresponding to the first class label are located in the first region and the pixel points corresponding to the second class label are located in the second region. The first region is a top region, such as a ceiling region, of the target space corresponding to the target panorama; the second area is a ground area corresponding to the target space on the target panorama.
After all the pixel points which are positioned in the first area and correspond to the first class labels in the target panoramic image are acquired, three-dimensional coordinates corresponding to the pixel points in the first area respectively can be determined based on the depth image, and a first coordinate set is acquired based on the determined three-dimensional coordinates; after all the pixel points located in the second area and corresponding to the second class labels in the target panoramic image are acquired, three-dimensional coordinates corresponding to the pixel points in the second area respectively can be determined based on the depth image, and a second coordinate set is acquired based on the determined three-dimensional coordinates.
In the case of determining the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least a part of the pixels located in the third region of the target panorama and a fourth coordinate set corresponding to at least a part of the pixels located in the fourth region of the target panorama may be obtained according to at least one of the first coordinate set and the second coordinate set.
The third area is a wall area corresponding to the target space on the target panoramic image, and the fourth area is an area which is different from the first area, the second area and the third area in the target panoramic image. The fourth region of the target panorama may be regarded as a region remaining after the first, second, and third regions are removed from the target panorama, and may be regarded as a region surrounded by the first, second, and third regions.
The third coordinate set comprises three-dimensional coordinates corresponding to at least part of pixel points of the third region respectively, and the three-dimensional coordinates corresponding to any pixel point in the third region need to be determined based on the three-dimensional coordinates of corresponding pixel points in the first coordinate set or the second coordinate set; the fourth coordinate set includes three-dimensional coordinates corresponding to at least part of the pixel points in the fourth area, and the three-dimensional coordinates corresponding to any pixel point in the fourth area need to be determined based on the three-dimensional coordinates of the corresponding pixel point in the first coordinate set or the second coordinate set.
The pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points, and the target set comprises at least part of pixel points in the target panoramic image because the third coordinate set comprises three-dimensional coordinates corresponding to at least part of pixel points in the third area and the fourth coordinate set comprises three-dimensional coordinates corresponding to at least part of pixel points in the fourth area.
According to the embodiment, all pixel points which are positioned in the first area and correspond to the first type of labels are acquired according to the panoramic segmented image, all pixel points which are positioned in the second area and correspond to the second type of labels are acquired, the acquired pixel points are processed based on the depth image, three-dimensional coordinates respectively corresponding to all pixel points in the first area and three-dimensional coordinates respectively corresponding to all pixel points in the second area can be determined, so that a first coordinate set and a second coordinate set are determined, and coordinate sets respectively corresponding to the first area and the second area are acquired based on the matching of the panoramic segmented image and the depth image; after the first coordinate set and the second coordinate set are determined, the third coordinate set corresponding to the third area and the fourth coordinate set corresponding to the fourth area are obtained based on at least one of the first coordinate set and the second coordinate set, three-dimensional coordinates corresponding to pixel points of other areas are obtained by operation based on the existing coordinate sets, the processing flow is simplified, and the processing efficiency is improved.
The process of determining the first set of coordinates and the second set of coordinates is described below. Referring to fig. 3, when determining, based on the depth image, a first coordinate set including three-dimensional coordinates corresponding to the pixel points of the first region and a second coordinate set including three-dimensional coordinates corresponding to the pixel points of the second region, the method includes:
step 321, based on the depth image, acquiring a first height value corresponding to each pixel point in the first area, and acquiring a second height value corresponding to each pixel point in the second area, where the height value corresponding to each pixel point is a vertical component corresponding to a reference three-dimensional coordinate determined by the pixel point based on the depth image in a height direction.
Step 322, determining a first average height value according to the first height values corresponding to the pixel points in the first area.
Step 323, determining a second average height value according to the second height values corresponding to the pixel points in the second area.
Step 324, determining three-dimensional coordinates corresponding to each pixel point in the first area according to the first average height value, the panoramic pixel coordinates corresponding to each pixel point in the first area and a conversion formula, and determining the first coordinate set based on the three-dimensional coordinates corresponding to each pixel point in the first area; the conversion formula is used for converting panoramic pixel coordinates into three-dimensional coordinates.
Step 325, determining three-dimensional coordinates corresponding to each pixel point in the second area according to the second average height value, the panoramic pixel coordinates corresponding to each pixel point in the second area and the conversion formula, and determining the second coordinate set based on the three-dimensional coordinates corresponding to each pixel point in the second area.
The reference three-dimensional coordinates of the pixel points may be determined based on the depth image; however, the reference three-dimensional coordinates directly determined from the depth image do not all lie on one 3D plane. Since the first region and the second region each correspond to a plane in three-dimensional space, the depth image is only used for determining the height of each region.
After determining the pixel points corresponding to the first class labels and located in the first area, a reference three-dimensional coordinate corresponding to each pixel point in the first area may be acquired based on the depth image, and calculation may be performed based on the reference three-dimensional coordinate to determine a three-dimensional coordinate (final three-dimensional coordinate) corresponding to the pixel point.
For each pixel point in the first area, after the reference three-dimensional coordinate corresponding to the current pixel point is acquired, a vertical component corresponding to the reference three-dimensional coordinate in the height direction may be acquired, and the vertical component corresponding to the reference three-dimensional coordinate in the height direction is determined as a first height value corresponding to the pixel point. Since the first region is a top region of the target space corresponding to the target panorama, a vertical component corresponding to the reference three-dimensional coordinate in the height direction is a component in a direction parallel to the height of the wall surface.
After the first height value corresponding to each pixel point in the first area is obtained, an average value is calculated based on those first height values to obtain a first average height value, which is used as the height h1 of the first area. Then, for each pixel point in the first area, the real coordinate position (final three-dimensional coordinate) of the pixel point in three-dimensional space is determined based on the first average height value h1, the panoramic pixel coordinate corresponding to the pixel point, and a conversion formula, where the conversion formula is a calculation formula for converting a panoramic pixel point into a 3D coordinate point.
After the three-dimensional coordinates (final three-dimensional coordinates) corresponding to each pixel point in the first area are obtained, the three-dimensional coordinates corresponding to each pixel point in the first area are aggregated, and a first coordinate set corresponding to the first area is obtained.
After determining the pixel points corresponding to the second class labels and located in the second area, a reference three-dimensional coordinate corresponding to each pixel point in the second area may be acquired based on the depth image, and calculation may be performed based on the reference three-dimensional coordinate to determine a three-dimensional coordinate (final three-dimensional coordinate) corresponding to the pixel point.
For each pixel point in the second area, after the reference three-dimensional coordinate corresponding to the current pixel point is acquired, a vertical component corresponding to the reference three-dimensional coordinate in the height direction may be acquired, and the vertical component corresponding to the reference three-dimensional coordinate in the height direction is determined as a second height value corresponding to the pixel point. Because the second area is the ground area corresponding to the target space on the target panorama, the vertical component corresponding to the reference three-dimensional coordinate in the height direction is the component in the direction parallel to the height of the wall surface.
After the second height value corresponding to each pixel point in the second area is obtained, an average value is calculated based on those second height values to obtain a second average height value, which is used as the height h2 of the second area. Then, for each pixel point in the second area, the real coordinate position (final three-dimensional coordinate) of the pixel point in three-dimensional space is determined based on the second average height value h2, the panoramic pixel coordinate corresponding to the pixel point, and the conversion formula, where the conversion formula is a calculation formula for converting a panoramic pixel point into a 3D coordinate point.
After the three-dimensional coordinates (final three-dimensional coordinates) corresponding to each pixel point in the second area are obtained, the three-dimensional coordinates corresponding to each pixel point in the second area are aggregated, and a second coordinate set corresponding to the second area is obtained.
In the above embodiment, the first average height value corresponding to the first area and the second average height value corresponding to the second area may be determined based on the depth image, the three-dimensional coordinates may be determined based on the first average height value, the panoramic pixel coordinates and the conversion formula for each pixel point in the first area, and the three-dimensional coordinates may be determined based on the second average height value, the panoramic pixel coordinates and the conversion formula for each pixel point in the second area, so as to obtain the first coordinate set corresponding to the first area and the second coordinate set corresponding to the second area based on the depth image.
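Steps 321 to 325 can be sketched as below for one plane region, assuming an equirectangular panorama with the virtual camera at the origin. The conversion formula shown is the standard equirectangular ray-plane intersection, offered only as a plausible instance of the unspecified conversion formula, not the patent's exact one.

```python
import math

def mean_height(heights):
    # Steps 322/323: average the vertical components (height values) of the
    # reference three-dimensional coordinates of one region's pixels.
    return sum(heights) / len(heights)

def pixel_to_3d_on_plane(u, v, w, h, plane_height):
    """Steps 324/325 (assumed conversion formula): map panoramic pixel
    (u, v) in a w-by-h equirectangular panorama to the 3D point where its
    viewing ray meets the horizontal plane at plane_height (positive for
    the ceiling h1, negative for the floor h2, camera at the origin)."""
    lon = (u / w) * 2.0 * math.pi - math.pi       # longitude in [-pi, pi]
    lat = math.pi / 2.0 - (v / h) * math.pi       # latitude in [-pi/2, pi/2]
    t = plane_height / math.sin(lat)              # scale the ray to the plane
    x = t * math.cos(lat) * math.sin(lon)
    z = t * math.cos(lat) * math.cos(lon)
    return (x, plane_height, z)
```

Every point produced this way shares the same vertical component, which is exactly why the region's average height is computed first: it forces all pixels of the top (or ground) region onto a single 3D plane.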
The process of determining the third coordinate set and the fourth coordinate set is described below. The obtaining, according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least a part of pixels located in a third area of the target panorama, and obtaining a fourth coordinate set corresponding to at least a part of pixels located in a fourth area of the target panorama, includes:
for each pixel point in the third area and the fourth area, searching a first pixel point which is associated with the current pixel point in the column direction and is positioned in the first area or the second area, wherein the first pixel point is intersected with a second pixel point corresponding to the current pixel point, and the second pixel point is in the same column with the current pixel point and is intersected with the first area or the second area;
under the condition that the first pixel point is found, based on the first coordinate set or the second coordinate set, acquiring a three-dimensional coordinate corresponding to the first pixel point;
according to the three-dimensional coordinates corresponding to the first pixel points, a first distance between a projection point corresponding to a virtual camera in a three-dimensional live-action space model and a first pixel point associated with the current pixel point in the column direction is obtained, wherein the three-dimensional live-action space model is a three-dimensional space model corresponding to the target space, and in the three-dimensional live-action space model, a connecting line of the projection point and the first pixel point is perpendicular to the column direction of the current pixel point;
Determining a depth value of the current pixel point based on the first distance and the panorama latitude angle corresponding to the current pixel point, and determining a three-dimensional coordinate of the current pixel point based on the depth value of the current pixel point;
determining the third coordinate set according to the three-dimensional coordinates corresponding to at least part of the pixel points in the third region, and determining the fourth coordinate set according to the three-dimensional coordinates corresponding to at least part of the pixel points in the fourth region;
wherein the pixel points in the third region correspond to a third category label; the pixel points in the fourth region correspond to at least one fourth category label.
For each pixel point in the third area and the fourth area, a first pixel point associated with the current pixel point in the column direction can be searched in the first area or the second area, and under the condition that the first pixel point is searched, the three-dimensional coordinate corresponding to the first pixel point is obtained based on the first coordinate set or the second coordinate set.
When searching for the first pixel point associated with the current pixel point, the second pixel point which is in the same column as the current pixel point and intersects the first area or the second area may be searched for first. Specifically, a search may first be made for the second pixel point which is in the same column as the current pixel point and intersects the first area; if no such pixel point is found, a search may then be made for the second pixel point which is in the same column as the current pixel point and intersects the second area; and if no such pixel point is found either, it is determined that no second pixel point corresponding to the current pixel point exists.
After the second pixel point corresponding to the current pixel point is found, if the second pixel point intersects the first area, the pixel point in the first area that intersects the second pixel point is determined as the first pixel point; if the second pixel point intersects the second area, the pixel point in the second area that intersects the second pixel point is determined as the first pixel point. The first pixel point associated with the current pixel point is thereby obtained. For the case where the first pixel point is located in the first area, the three-dimensional coordinate corresponding to the first pixel point can be looked up directly in the first coordinate set corresponding to the first area; for the case where the first pixel point is located in the second area, the three-dimensional coordinate corresponding to the first pixel point can be looked up directly in the second coordinate set corresponding to the second area.
After the three-dimensional coordinates corresponding to the first pixel point are obtained, a first distance between a projection point of the virtual camera on the target plane in the three-dimensional live-action space model and the first pixel point associated with the current pixel point in the column direction can be obtained according to those three-dimensional coordinates. The target plane may be the end surface of the three-dimensional live-action space model corresponding to the top or the ground of the target space; both the projection point corresponding to the virtual camera and the first pixel point are located on the target plane, and in the three-dimensional live-action space model the connecting line between the projection point and the first pixel point is perpendicular to the column direction (height direction) in which the current pixel point is located. The three-dimensional live-action space model is a three-dimensional space model corresponding to the target space. The virtual camera may be, but is not limited to, the origin of coordinates of the three-dimensional live-action space model; it may be spaced a certain distance from the ground, the top end surface, and the wall surfaces of the model, and may be arranged at any position.
After the first distance corresponding to the current pixel point is obtained, the depth value of the current pixel point can be determined based on the first distance and the panorama latitude angle corresponding to the current pixel point, and the three-dimensional coordinates of the current pixel point can be determined based on that depth value; the three-dimensional coordinates of the current pixel point are thus obtained by operation from the three-dimensional coordinates of the matched first pixel point. When determining the three-dimensional coordinates of a pixel point based on its depth value, the calculation may be performed based on the depth value of the pixel point, the panoramic pixel coordinates of the pixel point, and the conversion formula.
When the first pixel point associated with the current pixel point in the column direction is searched for in the first area or the second area, the first area may be searched first; if the associated first pixel point cannot be found in the first area, the search may continue in the second area; and if the associated first pixel point cannot be found in the second area either, the calculation of the three-dimensional coordinate corresponding to the current pixel point is abandoned. Of course, it is also possible to search the second area first and then the first area, which is not particularly limited in this embodiment.
Because other objects may occlude the junction between the ground and the wall surface, and other objects may occlude the junction between the top and the wall surface, for a pixel point in the third area the associated first pixel point may not be found in either the second area or the first area. Likewise, because other objects may be placed on the ground or suspended from the top, for a pixel point in the fourth area the associated first pixel point may not be found in either the second area or the first area.
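The column-wise search described above can be sketched as follows. This is a minimal illustration assuming the panoramic segmentation image is available as a per-pixel label map; the concrete label values and the function name are chosen here for illustration only and are not fixed by the embodiment. Following the fig. 4 example, the ground area is tried before the top area, although the embodiment does not limit the order.

```python
import numpy as np

# Hypothetical label values; the embodiment does not fix concrete ids.
TOP_LABEL, GROUND_LABEL, WALL_LABEL = 0, 1, 2

def find_first_pixel(labels: np.ndarray, row: int, col: int):
    """For the current pixel (row, col), search its column for the junction
    with the ground area first and the top area second, returning
    (junction_row, col, area_label), or None when both junctions are
    occluded in this column."""
    column = labels[:, col]
    for area in (GROUND_LABEL, TOP_LABEL):
        rows = np.flatnonzero(column == area)
        if rows.size == 0:
            continue  # area not visible in this column, e.g. occluded
        if area == GROUND_LABEL and rows[0] > row:
            return int(rows[0]), col, area   # topmost ground pixel below P
        if area == TOP_LABEL and rows[-1] < row:
            return int(rows[-1]), col, area  # bottommost top pixel above P
    return None
```

In a panorama column whose top rows are ceiling, middle rows are wall, and bottom rows are ground, a wall pixel resolves to the first ground pixel beneath it; if the ground junction is occluded, the top junction is tried, and if neither exists the caller abandons the pixel, matching the fallback described above.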
The third area is the wall surface area corresponding to the target space on the target panoramic image, and the pixel points in the third area correspond to the third category label. The pixel points in the fourth area correspond to fourth category labels, of which there is at least one; since the fourth area may include one or more pieces of furniture, different pieces of furniture may correspond to the same category label, or each piece of furniture may correspond to its own fourth category label.
After corresponding three-dimensional coordinates are calculated for each pixel point in the third area and the three-dimensional coordinates corresponding to at least part of the pixel points in the third area are obtained, the third coordinate set may be determined; after corresponding three-dimensional coordinates are calculated for each pixel point in the fourth area and the three-dimensional coordinates corresponding to at least part of the pixel points in the fourth area are obtained, the fourth coordinate set may be determined.
In the above embodiment, for the pixel points in the third and fourth areas, the associated first pixel point may be searched, and the three-dimensional coordinates of the current pixel point may be calculated based on the three-dimensional coordinates of the first pixel point, so as to determine the third and fourth coordinate sets with the aid of the first and/or second coordinate sets.
As an optional embodiment, the determining the depth value of the current pixel point based on the first distance and the panorama latitude angle corresponding to the current pixel point includes:
determining the distance between the virtual camera and the current pixel point in the three-dimensional live-action space model based on the ratio of the first distance to the cosine value of the latitude angle of the panoramic image corresponding to the current pixel point;
and determining the distance between the virtual camera and the current pixel point as the depth value of the current pixel point.
The first distance is the distance between the projection point of the virtual camera on the target plane and the first pixel point associated with the current pixel point in the column direction. The panorama latitude angle a corresponding to the current pixel point is shown in fig. 4, where d represents the first distance, P represents the current pixel point, and Q represents the first pixel point associated with the current pixel point. The distance between the virtual camera and the current pixel point in the three-dimensional live-action space model can be determined based on the ratio of the first distance to the cosine of the panorama latitude angle corresponding to the current pixel point, and this distance is determined as the depth value of the current pixel point, so that the depth value is obtained by operation. Fig. 4 illustrates the case where the current pixel point is a pixel point in the wall surface area.
Taking fig. 4 as an example, the process of searching for the first pixel point associated with the current pixel point and calculating the depth value of the current pixel point is described below. The current pixel point P is a pixel point in a wall area. The junction point between the wall pixel points and the ground pixel points in the panorama column where the current pixel point P is located is calculated, and the 3d coordinate Xq of the ground pixel point Q at that junction is taken, so that the distance d between the projection of the virtual camera on the ground and the ground pixel point Q can be obtained; the connecting line between the projection of the virtual camera on the ground and the ground pixel point Q is perpendicular to the straight line corresponding to the column direction of the current pixel point P.
Based on the panorama latitude angle a of the point P and the distance d, the depth value of the current pixel point P is obtained by trigonometric operation, and the 3d point coordinate Xp of the current pixel point P is thereby obtained. If the junction with the ground pixel points is not found, an attempt is made to find the junction with the ceiling pixel points; if that junction is not found either, the calculation of the 3d point position of the current pixel point is abandoned.
In the above embodiment, a trigonometric operation may be performed based on the first distance and the panorama latitude angle corresponding to the current pixel point, the depth value of the current pixel point may be determined from that operation, and the three-dimensional coordinate may then be determined based on the depth value.
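The relationship illustrated in fig. 4 amounts to a single trigonometric operation. A minimal sketch (the function name is illustrative; the distance d and latitude angle a are the quantities named in the text) is:

```python
import math

def depth_from_junction(first_distance: float, latitude_angle: float) -> float:
    """Depth value of the current pixel point P: the ratio of the first
    distance d (virtual-camera projection to the junction pixel Q on the
    target plane) to the cosine of P's panorama latitude angle a."""
    return first_distance / math.cos(latitude_angle)

# e.g. d = 2.0 and a = 60 degrees give a depth value of 2.0 / cos(60°) = 4.0
```

The latitude angle is measured between the camera-to-P ray and the target plane, so as P approaches the junction Q the angle tends to zero and the depth tends to d itself.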
The following describes a plane projection process, wherein the projecting the partial target pixel points carrying color feature values based on the three-dimensional coordinates of the partial target pixel points in the target set to obtain a plane point cloud display diagram corresponding to the target space includes:
screening out partial target pixel points meeting projection requirements from the target set based on a preset rule;
based on the three-dimensional coordinates of the partial target pixel points, projecting the partial target pixel points carrying color characteristic values to a preset plane, and obtaining a plane point cloud display diagram;
and determining the color characteristic value carried by the target pixel point based on the target panoramic image, wherein the pixel point positioned in the first area does not meet the projection requirement.
After the target set is determined, part of the target pixel points meeting the projection requirement can be screened out of the target set based on a preset rule, and the part of the target pixel points carrying color characteristic values are projected onto a preset plane based on the three-dimensional coordinates of the screened target pixel points, so that a planar point cloud display diagram is obtained through plane projection. Referring to fig. 5, a specific illustration of a planar point cloud display diagram corresponding to a room is shown; the colors of the different areas are not shown in fig. 5.
The virtual camera can be spaced a certain distance from the top end face, the ground, and the wall surfaces of the three-dimensional live-action space model, and can be arranged at any position. A target panoramic image corresponding to the target space can be obtained based on shooting by the virtual camera in the three-dimensional live-action space model.
When the target pixel points are screened based on the preset rule, the pixel points corresponding to the first area can be filtered out of the target set, thereby realizing pixel point screening and obtaining the pixel points meeting the projection requirement. Since projecting the pixel points of the first area onto the ground area would affect the projection effect, these pixel points need to be filtered out.
When the pixel point screening is performed based on the preset rule, the target pixel points located below the horizontal plane where the virtual camera is located can be selected, so that the pixel points meeting the projection requirement are screened out of the target set. Of course, other strategies for screening the target pixel points may be employed, which are not further described here.
When the target pixel point carrying the color characteristic value is projected to a preset plane, the target pixel point is actually projected to the ground of the three-dimensional real scene space model.
In order to simplify the projection operation, only the necessary target pixel points can be selected from the target set, avoiding the heavy computational workload caused by too many pixel points participating in the projection.
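The screening and projection steps above can be sketched as follows, assuming the target set is held as parallel arrays of three-dimensional coordinates (with the height as the third component), color feature values, and category labels; the array layout and all names are assumptions for illustration, not part of the embodiment.

```python
import numpy as np

def planar_point_cloud(points: np.ndarray, colors: np.ndarray,
                       labels: np.ndarray, top_label: int,
                       camera_height: float):
    """points: (N, 3) three-dimensional coordinates, third component = height;
    colors: (N, 3) color feature values; labels: (N,) category labels.
    Pixel points of the first (top) area do not meet the projection
    requirement, and points at or above the virtual camera's horizontal
    plane are screened out; the survivors are projected onto the ground
    plane by dropping the height component."""
    keep = (labels != top_label) & (points[:, 2] < camera_height)
    return points[keep, :2], colors[keep]
```

The returned (x, y) coordinates and per-point colors can then be rasterized onto the preset plane to render the planar point cloud display diagram.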
The above is the overall implementation process of the resource allocation method provided in the embodiments of the present application. When a resource request for generating a planar point cloud display diagram of a target space, sent by a client through a target account under a target application program, is received, the current remaining resource quota of the target account is obtained based on the account information of the target account carried in the resource request. When the current remaining resource quota meets the target resource quota required for generating the planar point cloud display diagram of the target space, the planar point cloud display diagram of the target space is generated, and the target resource quota is deducted from the current remaining resource quota to update it. In this way, the planar point cloud display diagram required by the client can be generated on the basis of resource deduction, and the client and the server interact over the planar point cloud display diagram of the target space through resource interchange, thereby providing a dedicated channel for the client to acquire planar point cloud display diagrams and improving the efficiency with which the client acquires them.
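The check-and-deduct flow can be sketched as follows; the mapping list is modeled here as an in-memory dict, and all names are illustrative rather than taken from the embodiment.

```python
class ResourceAllocator:
    """Minimal sketch of the server-side quota flow (names illustrative)."""

    def __init__(self, mapping_list: dict, target_quota: int):
        self.mapping_list = mapping_list  # account info -> remaining quota
        self.target_quota = target_quota  # quota required per display diagram

    def handle_request(self, account_info: str) -> bool:
        remaining = self.mapping_list.get(account_info, 0)
        if remaining < self.target_quota:  # trigger condition not met
            return False
        # ... generate the planar point cloud display diagram here ...
        self.mapping_list[account_info] = remaining - self.target_quota
        return True
```

With a mapping list of `{"acct": 10}` and a target quota of 4, two requests succeed (10 → 6 → 2) and a third is refused, the quota being deducted and updated only when generation proceeds.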
By generating planar point cloud display diagrams, the point cloud effect is presented in a new image display form, improving the visual experience of the user; by calculating the three-dimensional coordinates of the pixel points based on the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of the pixel points at the edges of the scene can be improved, so that a planar point cloud display diagram of better quality can be obtained and its display effect optimized.
No professional equipment is needed, which reduces the dependence on equipment and saves cost while ensuring the point cloud display effect; the depth image is acquired based on a depth image model and the panoramic segmentation image is acquired based on a semantic segmentation model, so that the required images are acquired from maturely trained models, which ensures quality and improves processing efficiency; and the third coordinate set and the fourth coordinate set are determined based on the first coordinate set and/or the second coordinate set, so that the three-dimensional coordinates corresponding to the pixel points in the other areas are obtained by operation on the existing coordinate sets, simplifying the processing flow and improving processing efficiency.
An embodiment of the present application provides a resource allocation device, which is applied to a server, and as shown in fig. 6, the device includes:
a first receiving module 601, configured to receive a resource request sent by a client to generate a planar point cloud display diagram of a target space, where the resource request is sent by a target account under a target application program on the client, and the resource request includes account information of the target account, where the planar point cloud display diagram is an image generated by projecting a portion of pixels carrying color feature values based on three-dimensional coordinates corresponding to a portion of pixels in a target panorama of the target space, and the three-dimensional coordinates corresponding to the pixels are determined based on a depth image and a panorama segmentation image, and the depth image and the panorama segmentation image are generated based on the target panorama;
An obtaining module 602, configured to obtain a current remaining resource quota of the target account according to account information of the target account;
the processing module 603 is configured to generate a planar point cloud display diagram of the target space in response to the current remaining resource quota meeting a trigger condition of the target resource quota required for generating the planar point cloud display diagram of the target space, and deduct the target resource quota from the current remaining resource quota to update the current remaining resource quota.
Optionally, the server side includes a mapping list for storing the mapping relationship between account information and resource amount; the obtaining module is further configured to:
and determining the matched resource amount in the mapping list based on the account information of the target account, and determining the determined resource amount as the current residual resource amount of the target account.
Optionally, the processing module is further configured to:
and deducting the current residual resource limit of the target account stored in the mapping list from the target resource limit, and updating the current residual resource limit of the target account in the mapping list.
Optionally, the processing module is further configured to:
And generating a plane point cloud display diagram of the target space based on the target panorama of the target space in response to the triggering condition that the current residual resource limit is greater than or equal to the target resource limit.
Optionally, the apparatus further comprises:
the second receiving module is used for receiving a resource configuration request sent by the client through the target account, wherein the resource configuration request carries a first resource limit;
and the updating module is used for responding to the resource configuration request and updating the current residual resource limit of the target account according to the first resource limit.
Optionally, the processing module includes:
the first acquisition sub-module is used for acquiring the depth image and the panoramic segmentation image based on a target panoramic image of the target space;
the second acquisition sub-module is used for acquiring three-dimensional coordinates corresponding to each target pixel point in a target set according to the depth image and the panoramic segmentation image, wherein the target set comprises at least part of pixel points in the target panoramic image;
the projection acquisition sub-module is used for projecting part of target pixel points carrying color characteristic values based on the three-dimensional coordinates of the part of target pixel points in the target set to acquire a plane point cloud display diagram corresponding to the target space, wherein the part of target pixel points are pixel points meeting projection requirements.
Optionally, the first obtaining submodule includes:
the first acquisition unit is used for predicting the depth value of a pixel point in the target panoramic image based on a depth image model, and acquiring the depth image;
the second obtaining unit is used for carrying out category prediction on the pixel points in the target panoramic image based on the semantic segmentation model, and obtaining the panoramic segmentation image carrying category labels corresponding to the pixel points in the target panoramic image.
Optionally, the second acquisition submodule includes:
a first determining unit, configured to determine, according to the panoramic segmented image, all pixels corresponding to a first class label and all pixels corresponding to a second class label in the target panoramic image, where the panoramic segmented image includes class labels corresponding to respective pixels in the target panoramic image, and the pixels corresponding to the first class label are located in a first area and the pixels corresponding to the second class label are located in a second area;
a second determining unit, configured to determine, based on the depth image, a first coordinate set including three-dimensional coordinates corresponding to the pixel points of the first area and a second coordinate set including three-dimensional coordinates corresponding to the pixel points of the second area;
A third obtaining unit, configured to obtain, according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least a part of pixels located in a third area of the target panorama, and obtain a fourth coordinate set corresponding to at least a part of pixels located in a fourth area of the target panorama;
the pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points;
the first area, the second area and the third area are respectively a top area, a ground area and a wall area, which correspond to the target space on the target panoramic image, and the fourth area is an area which is different from the first area, the second area and the third area in the target panoramic image.
Optionally, the second determining unit includes:
the first acquisition subunit is used for acquiring a first height value corresponding to each pixel point in the first area and a second height value corresponding to each pixel point in the second area based on the depth image, wherein the height value corresponding to each pixel point is a vertical component corresponding to a reference three-dimensional coordinate determined by the pixel point based on the depth image in the height direction;
The first determining subunit is used for determining a first average height value according to the first height values respectively corresponding to the pixel points in the first area;
a second determining subunit, configured to determine a second average height value according to second height values corresponding to each pixel point in the second area respectively;
the third determining subunit is configured to determine, according to the first average height value, the panoramic pixel coordinate corresponding to each pixel point in the first area, and a conversion formula, a three-dimensional coordinate corresponding to each pixel point in the first area, and determine the first coordinate set based on the three-dimensional coordinate corresponding to each pixel point in the first area;
a fourth determining subunit, configured to determine, according to the second average height value, the panoramic pixel coordinate and the conversion formula corresponding to each pixel point in the second area, a three-dimensional coordinate corresponding to each pixel point in the second area, and determine the second coordinate set based on the three-dimensional coordinate corresponding to each pixel point in the second area;
the conversion formula is used for converting panoramic pixel coordinates into three-dimensional coordinates.
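The conversion formula itself is not spelled out in the text. Assuming the common equirectangular panorama mapping with the virtual camera at the origin — an assumption for illustration, not a statement of the embodiment's actual formula — panoramic pixel coordinates plus a depth value convert to three-dimensional coordinates as:

```python
import math

def panorama_pixel_to_3d(u: float, v: float, width: int, height: int,
                         depth: float):
    """Assumed equirectangular conversion: longitude spans [-pi, pi) across
    columns, latitude spans [pi/2, -pi/2] down rows; the returned (x, y, z)
    places z along the height direction."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = depth * math.cos(lat) * math.cos(lon)
    y = depth * math.cos(lat) * math.sin(lon)
    z = depth * math.sin(lat)
    return x, y, z
```

Under the same assumption, the height-based variant used for the first and second areas would follow by fixing z to the average height value and solving depth = z / sin(lat) before applying the conversion.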
Optionally, the third acquisition unit includes:
A searching subunit, configured to search, for each pixel point in the third area and the fourth area, a first pixel point associated with a current pixel point in a column direction and located in the first area or the second area, where the first pixel point intersects a second pixel point corresponding to the current pixel point, and the second pixel point intersects the first area or the second area in the same column as the current pixel point;
the second acquisition subunit is used for acquiring the three-dimensional coordinates corresponding to the first pixel point based on the first coordinate set or the second coordinate set under the condition that the first pixel point is found;
a third obtaining subunit, configured to obtain, according to the three-dimensional coordinate corresponding to the first pixel point, a first distance between a projection point corresponding to a virtual camera in a three-dimensional real-scene space model and a first pixel point associated with a current pixel point in a column direction, where the three-dimensional real-scene space model is a three-dimensional space model corresponding to the target space, and in the three-dimensional real-scene space model, a connection line between the projection point and the first pixel point is perpendicular to the column direction in which the current pixel point is located;
A fifth determining subunit, configured to determine a depth value of the current pixel point based on the first distance and a panorama latitude angle corresponding to the current pixel point, and determine a three-dimensional coordinate of the current pixel point based on the depth value of the current pixel point;
a sixth determining subunit, configured to determine the third coordinate set according to three-dimensional coordinates corresponding to at least some pixel points in the third area, and determine the fourth coordinate set according to three-dimensional coordinates corresponding to at least some pixel points in the fourth area;
wherein the pixel points in the third region correspond to a third category label; the pixel points in the fourth region correspond to at least one fourth category label.
Optionally, the fifth determining subunit is further configured to:
determining the distance between the virtual camera and the current pixel point in the three-dimensional live-action space model based on the ratio of the first distance to the cosine value of the latitude angle of the panoramic image corresponding to the current pixel point;
and determining the distance between the virtual camera and the current pixel point as the depth value of the current pixel point.
Optionally, the projection acquisition submodule includes:
the screening unit is used for screening out partial target pixel points meeting projection requirements from the target set based on a preset rule;
The projection acquisition unit is used for projecting the partial target pixel points carrying the color characteristic values to a preset plane based on the three-dimensional coordinates of the partial target pixel points, and acquiring the plane point cloud display diagram;
and determining the color characteristic value carried by the target pixel point based on the target panoramic image, wherein the pixel point positioned in the first area does not meet the projection requirement.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
The embodiment of the application also provides an electronic device, including: a processor, a memory, and a computer program stored in the memory and capable of running on the processor. When executed by the processor, the computer program realizes each process of the above resource allocation method embodiment and can achieve the same technical effect; to avoid repetition, the description is not repeated here.
For example, fig. 7 shows a schematic diagram of the physical structure of an electronic device. As shown in fig. 7, the electronic device may include: processor 710, communication interface (Communications Interface) 720, memory 730, and communication bus 740, wherein processor 710, communication interface 720, memory 730 communicate with each other via communication bus 740. Processor 710 may invoke logic instructions in memory 730, processor 710 configured to perform steps in the resource allocation method described in any of the embodiments above.
Further, the logic instructions in the memory 730 described above may be implemented in the form of software functional units and may be stored in a computer readable storage medium when sold or used as a stand alone product. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements each process of the above-mentioned resource allocation method embodiment, and can achieve the same technical effects, so that repetition is avoided, and no further description is provided herein. Wherein the computer readable storage medium is selected from Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method for allocating resources, which is applied to a server, the method comprising:
receiving a resource request which is sent by a client and used for generating a planar point cloud display diagram of a target space, wherein the resource request is sent by a target account under a target application program on the client, the resource request comprises account information of the target account, the planar point cloud display diagram is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in a target panorama of the target space, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panorama segmentation image, and the depth image and the panorama segmentation image are generated based on the target panorama;
acquiring a current remaining resource quota of the target account according to the account information of the target account; and
in response to the current remaining resource quota satisfying a trigger condition of a target resource quota required for generating the planar point cloud display diagram of the target space, generating the planar point cloud display diagram of the target space and deducting the target resource quota from the current remaining resource quota to update the current remaining resource quota.
2. The method of claim 1, wherein the server includes a mapping list storing mapping relationships between account information and resource quotas;
the acquiring the current remaining resource quota of the target account according to the account information of the target account includes:
determining a matched resource quota in the mapping list based on the account information of the target account, and determining the matched resource quota as the current remaining resource quota of the target account.
3. The method of claim 2, wherein the deducting the target resource quota from the current remaining resource quota to update the current remaining resource quota includes:
deducting the target resource quota from the current remaining resource quota of the target account stored in the mapping list, and updating the current remaining resource quota of the target account in the mapping list.
4. The method of claim 1, wherein the generating the planar point cloud display diagram of the target space in response to the current remaining resource quota satisfying the trigger condition of the target resource quota required for generating the planar point cloud display diagram of the target space includes:
generating the planar point cloud display diagram of the target space based on a target panorama of the target space in response to the current remaining resource quota being greater than or equal to the target resource quota.
5. The method according to claim 1, wherein the method further comprises:
receiving a resource configuration request sent by the client through the target account, wherein the resource configuration request carries a first resource quota; and
in response to the resource configuration request, updating the current remaining resource quota of the target account according to the first resource quota.
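The quota check, deduction, and top-up flow recited in claims 1 through 5 can be sketched as follows. This is an illustrative Python sketch only; all names (QuotaServer, deduct, top_up) are hypothetical and not taken from the patent.

```python
class QuotaServer:
    """Illustrative sketch of the quota logic in claims 1-5 (all names hypothetical)."""

    def __init__(self):
        # Mapping list of claim 2: account information -> remaining resource quota.
        self.mapping = {}

    def get_remaining(self, account_info):
        # Claim 2: look up the matched quota for the account in the mapping list.
        return self.mapping.get(account_info, 0)

    def deduct(self, account_info, target_quota):
        # Claim 4's trigger condition: remaining quota >= target quota.
        remaining = self.get_remaining(account_info)
        if remaining >= target_quota:
            # Claim 3: deduct and write the updated quota back to the mapping list.
            self.mapping[account_info] = remaining - target_quota
            return True
        return False

    def top_up(self, account_info, first_quota):
        # Claim 5: update the remaining quota by the requested first resource quota.
        self.mapping[account_info] = self.get_remaining(account_info) + first_quota
```

In this sketch the planar point cloud generation would only proceed when `deduct` returns `True`, mirroring the trigger condition of claim 4.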
6. The method of claim 1, wherein the generating a planar point cloud representation of the target space comprises:
acquiring the depth image and the panoramic segmentation image based on a target panoramic image of the target space;
according to the depth image and the panoramic segmentation image, three-dimensional coordinates corresponding to each target pixel point in a target set are obtained, wherein the target set comprises at least part of pixel points in the target panoramic image;
and projecting, based on the three-dimensional coordinates of some of the target pixel points in the target set, those target pixel points carrying color characteristic values, to obtain the planar point cloud display diagram corresponding to the target space, wherein the projected target pixel points are pixel points meeting a projection requirement.
7. The method of claim 6, wherein the acquiring the depth image and the panoramic segmented image based on the target panorama of the target space comprises:
predicting a depth value of a pixel point in the target panoramic image based on a depth image model, and acquiring the depth image;
and carrying out category prediction on the pixel points in the target panoramic image based on a semantic segmentation model, and obtaining the panoramic segmentation image carrying category labels corresponding to the pixel points in the target panoramic image.
8. The method of claim 6, wherein the obtaining three-dimensional coordinates corresponding to each target pixel point in the target set according to the depth image and the panoramic segmented image includes:
determining all pixel points corresponding to a first class label and all pixel points corresponding to a second class label in the target panoramic image according to the panoramic segmented image, wherein the panoramic segmented image comprises class labels respectively corresponding to all pixel points in the target panoramic image, the pixel points corresponding to the first class label are located in a first area, and the pixel points corresponding to the second class label are located in a second area;
determining, based on the depth image, a first coordinate set comprising three-dimensional coordinates corresponding to the pixel points of the first area and a second coordinate set comprising three-dimensional coordinates corresponding to the pixel points of the second area;
according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least part of pixel points located in a third area of the target panoramic image is obtained, and a fourth coordinate set corresponding to at least part of pixel points located in a fourth area of the target panoramic image is obtained;
the pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points;
the first area, the second area and the third area are respectively the ceiling (top) area, the floor (ground) area and the wall area of the target space in the target panorama, and the fourth area is an area of the target panorama other than the first area, the second area and the third area.
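The region partition recited in claim 8 amounts to grouping pixel coordinates by their category labels in the panoptic segmentation image. A minimal sketch, assuming hypothetical integer label values (the patent does not fix them):

```python
import numpy as np

def split_regions(label_map):
    """Group panorama pixel coordinates by region, per claim 8.

    Label values are illustrative stand-ins for the category labels in the
    panoramic segmentation image: 0 = ceiling (first area), 1 = floor
    (second area), 2 = wall (third area), 3+ = other objects (fourth area).
    """
    return {
        "first (ceiling)": np.argwhere(label_map == 0),
        "second (floor)": np.argwhere(label_map == 1),
        "third (wall)": np.argwhere(label_map == 2),
        "fourth (other)": np.argwhere(label_map >= 3),
    }
```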
9. The method of claim 8, wherein the determining, based on the depth image, a first set of coordinates including three-dimensional coordinates corresponding to pixels of the first region, a second set of coordinates including three-dimensional coordinates corresponding to pixels of the second region, comprises:
acquiring a first height value corresponding to each pixel point in the first area and a second height value corresponding to each pixel point in the second area based on the depth image, wherein the height value corresponding to a pixel point is the vertical component, in the height direction, of the reference three-dimensional coordinate determined for that pixel point based on the depth image;
determining a first average height value according to the first height values respectively corresponding to the pixel points in the first area;
determining a second average height value according to second height values respectively corresponding to the pixel points in the second area;
according to the first average height value, panoramic pixel coordinates corresponding to each pixel point in the first area and a conversion formula, determining three-dimensional coordinates corresponding to each pixel point in the first area, and determining the first coordinate set based on the three-dimensional coordinates corresponding to each pixel point in the first area;
determining three-dimensional coordinates corresponding to each pixel point in the second area according to the second average height value, the panoramic pixel coordinates corresponding to each pixel point in the second area and a conversion formula, and determining the second coordinate set based on the three-dimensional coordinates corresponding to each pixel point in the second area;
The conversion formula is used for converting panoramic pixel coordinates into three-dimensional coordinates.
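Claim 9 determines three-dimensional coordinates from a known average height and a conversion formula. A minimal sketch of such a conversion, assuming an equirectangular panorama and a camera at the origin (assumptions not stated in the claim); `pixel_to_3d` and its parameters are illustrative:

```python
import math

def pixel_to_3d(u, v, width, height, plane_h):
    """Map a panorama pixel (u, v) to a 3D point on a horizontal plane.

    Assumes an equirectangular panorama of size width x height with the
    camera at the origin; plane_h is the known (average) height of the
    ceiling or floor plane, as in claim 9. Returns None when the viewing
    ray is parallel to the plane.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi      # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (v / height) * math.pi     # latitude in (-pi/2, pi/2]
    dy = math.sin(lat)                                # vertical ray component
    if abs(dy) < 1e-9:
        return None
    t = plane_h / dy  # scale so the vertical component equals plane_h
    return (t * math.cos(lat) * math.cos(lon),
            plane_h,
            t * math.cos(lat) * math.sin(lon))
```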
10. The method of claim 8, wherein the obtaining a third set of coordinates corresponding to at least a portion of pixels located in a third region of the target panorama and a fourth set of coordinates corresponding to at least a portion of pixels located in a fourth region of the target panorama according to at least one of the first set of coordinates and the second set of coordinates comprises:
for each pixel point in the third area and the fourth area, searching for a first pixel point which is associated with the current pixel point in the column direction and is located in the first area or the second area, wherein the first pixel point intersects a second pixel point corresponding to the current pixel point, and the second pixel point is in the same column as the current pixel point and intersects the first area or the second area;
under the condition that the first pixel point is found, based on the first coordinate set or the second coordinate set, acquiring a three-dimensional coordinate corresponding to the first pixel point;
according to the three-dimensional coordinates corresponding to the first pixel points, a first distance between a projection point corresponding to a virtual camera in a three-dimensional live-action space model and a first pixel point associated with the current pixel point in the column direction is obtained, wherein the three-dimensional live-action space model is a three-dimensional space model corresponding to the target space, and in the three-dimensional live-action space model, a connecting line of the projection point and the first pixel point is perpendicular to the column direction of the current pixel point;
Determining a depth value of the current pixel point based on the first distance and the panorama latitude angle corresponding to the current pixel point, and determining a three-dimensional coordinate of the current pixel point based on the depth value of the current pixel point;
determining the third coordinate set according to the three-dimensional coordinates corresponding to at least part of the pixel points in the third region, and determining the fourth coordinate set according to the three-dimensional coordinates corresponding to at least part of the pixel points in the fourth region;
wherein the pixel points in the third region correspond to a third category label; the pixel points in the fourth region correspond to at least one fourth category label.
11. The method of claim 10, wherein determining the depth value of the current pixel based on the first distance and the panorama latitude angle corresponding to the current pixel comprises:
determining the distance between the virtual camera and the current pixel point in the three-dimensional live-action space model based on the ratio of the first distance to the cosine value of the latitude angle of the panoramic image corresponding to the current pixel point;
and determining the distance between the virtual camera and the current pixel point as the depth value of the current pixel point.
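Claim 11 recites that the depth of a wall pixel equals the ratio of the first distance to the cosine of the pixel's panorama latitude angle, which reduces to a one-line computation; the function name is illustrative:

```python
import math

def wall_pixel_depth(first_distance, lat_angle):
    """Depth of a wall/other pixel per claim 11.

    first_distance: horizontal distance from the virtual camera's projection
    point to the associated first pixel point (claim 10).
    lat_angle: the pixel's panorama latitude angle, in radians.
    """
    return first_distance / math.cos(lat_angle)
```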
12. The method according to claim 8, wherein the projecting, based on the three-dimensional coordinates of some of the target pixel points in the target set, those target pixel points carrying color characteristic values to obtain the planar point cloud display diagram corresponding to the target space includes:
screening out, from the target set based on a preset rule, some target pixel points meeting the projection requirement;
based on the three-dimensional coordinates of the partial target pixel points, projecting the partial target pixel points carrying color characteristic values to a preset plane, and obtaining a plane point cloud display diagram;
wherein the color characteristic value carried by a target pixel point is determined based on the target panorama, and pixel points located in the first area do not meet the projection requirement.
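The screening-and-projection step of claim 12 can be sketched as dropping points that fail the projection requirement (the claim excludes the first, i.e. ceiling, area) and orthographically projecting the rest onto a horizontal plane. Names and the point representation here are illustrative:

```python
def project_points(points_with_color, excluded_labels=frozenset({"ceiling"})):
    """Sketch of claim 12's screening and projection (names illustrative).

    points_with_color: iterable of ((x, y, z), color, region_label) tuples,
    where y is the height component. Ceiling-region points are screened out,
    and the remaining colored points are projected onto the horizontal plane
    by dropping the height component.
    """
    plan = []
    for (x, y, z), color, label in points_with_color:
        if label in excluded_labels:
            continue  # fails the projection requirement
        plan.append(((x, z), color))
    return plan
```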
13. A resource allocation apparatus, for application to a server, the apparatus comprising:
a first receiving module, configured to receive a resource request sent by a client for generating a planar point cloud display diagram of a target space, wherein the resource request is sent by a target account under a target application program on the client, the resource request includes account information of the target account, the planar point cloud display diagram is an image generated by projecting some pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to those pixel points in a target panorama of the target space, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panorama;
an acquisition module, configured to acquire a current remaining resource quota of the target account according to the account information of the target account; and
a processing module, configured to, in response to the current remaining resource quota satisfying a trigger condition of a target resource quota required for generating the planar point cloud display diagram of the target space, generate the planar point cloud display diagram of the target space and deduct the target resource quota from the current remaining resource quota to update the current remaining resource quota.
14. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the resource allocation method according to any one of claims 1 to 12.
15. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the resource allocation method according to any of claims 1 to 12.
CN202310378033.9A 2023-04-10 2023-04-10 Resource allocation method, device, electronic equipment and storage medium Pending CN116542659A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310378033.9A CN116542659A (en) 2023-04-10 2023-04-10 Resource allocation method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310378033.9A CN116542659A (en) 2023-04-10 2023-04-10 Resource allocation method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116542659A true CN116542659A (en) 2023-08-04

Family

ID=87442551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310378033.9A Pending CN116542659A (en) 2023-04-10 2023-04-10 Resource allocation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116542659A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7003136B1 (en) * 2002-04-26 2006-02-21 Hewlett-Packard Development Company, L.P. Plan-view projections of depth image data for object tracking
WO2015162799A1 (en) * 2014-04-25 2015-10-29 楽天株式会社 Management device, management method, and program
US20190180279A1 (en) * 2017-12-11 2019-06-13 Mastercard International Incorporated Method and system for refund management with ongoing installments
CN111798302A (en) * 2020-06-29 2020-10-20 平安普惠企业管理有限公司 Quota updating method and device based on micro service, electronic equipment and storage medium
CN112288565A (en) * 2020-10-12 2021-01-29 北京三快在线科技有限公司 System, method and device for executing service
CN112733206A (en) * 2021-01-21 2021-04-30 深圳市轱辘车联数据技术有限公司 Resource allocation method, device, server and medium
US20220108528A1 (en) * 2019-07-29 2022-04-07 Zhejiang Sensetime Technology Development Co.,Ltd. Information processing method and device, positioning method and device, electronic device and storage medium
CN114629732A (en) * 2020-12-11 2022-06-14 北京金山云网络技术有限公司 Charging method and device for cloud resources, electronic equipment and medium
US20220198750A1 (en) * 2019-04-12 2022-06-23 Beijing Chengshi Wanglin Information Technology Co., Ltd. Three-dimensional object modeling method, image processing method, image processing device
CN115082154A (en) * 2022-06-24 2022-09-20 拉扎斯网络科技(上海)有限公司 Order processing method, system and device and electronic equipment
CN115393467A (en) * 2022-08-19 2022-11-25 北京城市网邻信息技术有限公司 House type graph generation method, device, equipment and medium
CN115439576A (en) * 2022-08-19 2022-12-06 北京城市网邻信息技术有限公司 Method, device, equipment and medium for generating house type graph of terminal equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陶志鹏; 陈志国; 王英; 吴冰冰; 程思琪: "Research on Real-Time Visualization of Massive 3D Terrain Data" (海量三维地形数据的实时可视化研究), 科技创新与应用 (Technology Innovation and Application), no. 30, 28 October 2013 (2013-10-28), pages 28-29 *

Similar Documents

Publication Publication Date Title
US10521971B2 (en) Method and apparatus for marking and displaying spatial size in virtual three-dimensional house model
CN107193372B (en) Projection method from multiple rectangular planes at arbitrary positions to variable projection center
CN108898516B (en) Method, server and terminal for entering between functions in virtual three-dimensional room speaking mode
CN107993282B (en) Dynamic measurable live-action map making method
KR101260132B1 (en) Stereo matching process device, stereo matching process method, and recording medium
KR20060113514A (en) Image processing apparatus, image processing method, and program and recording medium used therewith
CN108897468A (en) A kind of method and system of the virtual three-dimensional space panorama into the source of houses
CN103004187A (en) Multiple-site drawn-image sharing apparatus, multiple-site drawn-image sharing system, method executed by multiple-site drawn-image sharing apparatus, program, and recording medium
EP3547083A1 (en) Information processing program, information processing method, and information processing system
CN104575096A (en) Parking space sharing method and system based on crowdsourcing mode
CN108874471B (en) Method and system for adding additional elements between functions of house resources
JP2020119156A (en) Avatar creating system, avatar creating device, server device, avatar creating method and program
CN108230434B (en) Image texture processing method and device, storage medium and electronic device
CN111225287A (en) Bullet screen processing method and device, electronic equipment and storage medium
KR101593123B1 (en) Public infromation virtual reality system and method thereby
CN114449303A (en) Live broadcast picture generation method and device, storage medium and electronic device
CN107507255B (en) Picture compression quality factor obtaining method, system, equipment and storage medium
CN116542659A (en) Resource allocation method, device, electronic equipment and storage medium
CN107025680B (en) Map rendering method and device
CN116527663B (en) Information processing method, information processing device, electronic equipment and storage medium
CN116485634B (en) Point cloud display diagram generation method and device, electronic equipment and storage medium
CN110211221A (en) Electronic three-dimensional map data creation method
JP2006318015A (en) Image processing device, image processing method, image display system, and program
CN116485633A (en) Point cloud display diagram generation method and device, electronic equipment and storage medium
CN116527663A (en) Information processing method, information processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination