CN115097975A - Method, apparatus, device and storage medium for controlling view angle conversion - Google Patents

Method, apparatus, device and storage medium for controlling view angle conversion

Info

Publication number
CN115097975A
Authority
CN
China
Prior art keywords
target
field
view
location
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210806929.8A
Other languages
Chinese (zh)
Inventor
栾鑫月 (Luan Xinyue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202210806929.8A
Publication of CN115097975A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements

Abstract

According to embodiments of the present disclosure, a method, an apparatus, a device, and a storage medium for controlling view angle conversion are provided. The method for controlling view angle conversion includes: acquiring at least one parameter associated with a view angle conversion of a target space; in response to detecting a selection of a first location in a currently presented first scene of the target space, determining a target field-of-view region based on the first location and the at least one parameter; and rendering a second scene of the target space based on a target transition point associated with the target field-of-view region. In this way, the roaming viewing angle of scenes in the target space can be controlled, meeting personalized roaming requirements for the target space and further improving the user experience.

Description

Method, apparatus, device and storage medium for controlling view angle conversion
Technical Field
Example embodiments of the present disclosure generally relate to the field of computers, and more particularly, to a method, apparatus, device, and computer-readable storage medium for controlling perspective conversion.
Background
Currently, Virtual Reality (VR) technology is widely used in industries such as construction and real estate. Through VR technology, a three-dimensional model showing information such as the spatial layout, shape, and area of a building may be provided to a user, and this model may serve as a basis for various services. For example, in the house rental and sale industry, it is desirable to present a three-dimensional model of a house to be rented or sold, so that a user can take an all-round online view of a house of interest.
Disclosure of Invention
In a first aspect of the disclosure, a method for controlling view angle conversion is provided. The method includes acquiring at least one parameter associated with a view angle conversion of a target space. The method further comprises, in response to detecting a selection of a first location in a currently presented first scene of the target space, determining a target field-of-view region based on the first location and the at least one parameter, and presenting a second scene of the target space based on a target transition point associated with the target field-of-view region.
In a second aspect of the present disclosure, an apparatus for controlling a viewing angle conversion is provided. The apparatus includes a parameter acquisition module configured to acquire at least one parameter associated with a perspective conversion of a target space; a field-of-view region determination module configured to determine a target field-of-view region based on a first location in a currently presented first scene of the target space and the at least one parameter in response to detecting a selection of the first location; and a scene rendering module configured to render a second scene of the target space based on a target transition point associated with the target field of view region.
In a third aspect of the disclosure, an electronic device is provided. The apparatus comprises at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the apparatus to perform the method of the first aspect.
In a fourth aspect of the disclosure, a computer-readable storage medium is provided. The medium has stored thereon a computer program which, when executed by a processor, performs the method of the first aspect.
It should be understood that this Summary is not intended to identify key or essential features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flow diagram of a process for controlling perspective conversion according to some embodiments of the present disclosure;
FIG. 3 illustrates a schematic diagram of an operator interface for inputting parameters for controlling perspective transitions, in accordance with some embodiments of the present disclosure;
fig. 4A and 4B illustrate schematic views of perspective transitions according to some embodiments of the present disclosure;
fig. 5 illustrates a block diagram of an apparatus for controlling perspective conversion according to some embodiments of the present disclosure; and
FIG. 6 illustrates a block diagram of a device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are illustrated in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the term "include" and its variants should be interpreted as open-ended inclusion, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". Other explicit and implicit definitions may also be included below.
As described above, in industries such as construction and real estate, a three-dimensional model showing information on the spatial layout, shape, area, and the like of a building can be provided to a user using VR technology. For example, in the house rental and sale industry, a user may be presented with a three-dimensional model of a house to be rented or sold, so that the user can roam the house panoramically online.
Panoramic roaming of a house involves different viewing angles of the house. While a user roams a house panoramically through VR, the viewing angle may be converted multiple times. For example, in a currently presented scene of the house, if the user clicks a position on the display device used to present the scene, the viewing angle presented to the user may jump.
Because a house panorama contains a large number of points that may trigger a viewing-angle switch, the view angle conversion during panoramic roaming currently cannot be controlled. After clicking a location on the display device, the user may be switched to a viewing angle very far from the current scene, or may fail to switch at all because no point at that location can trigger the switch, which degrades the user experience.
According to various embodiments of the present disclosure, a scheme for controlling view angle conversion is proposed. For example, an electronic device may acquire at least one parameter related to the view angle conversion. Based on the at least one parameter and a position determined in the current scene of the target space, the electronic device may determine a target field-of-view region in the current scene in which transition points capable of triggering a view angle conversion may exist. Based on a transition point in the target field-of-view region, the presentation can switch from the current scene of the target space to another scene of the target space corresponding to that transition point.
With such implementations, the browsing viewing angle of scenes in the target space can be controlled, meeting personalized browsing requirements and further improving the user experience.
Example Environment
Referring initially to FIG. 1, there is illustrated a schematic diagram of an example environment 100 in which implementations according to the present disclosure can be realized.
In the example environment 100, the acquisition device 140 may acquire images of a target space in the building 130. In some embodiments, the acquisition device 140 may be implemented as a panoramic camera. In some other embodiments, the acquisition device 140 may also be implemented as another device capable of capturing images of the target space. The acquisition device 140 may acquire images of the target space at different location points in one or more buildings 130.
Optionally, the acquisition device 140 may be connected to the electronic device 110 or integrated within the electronic device 110. The acquisition device 140 may transmit the acquired images to the electronic device 110.
The electronic device 110 is installed with an application 120. In some embodiments, the application 120 may be an application associated with house rental and sale. In some other embodiments, the application 120 may also relate to other services in the construction industry. The application 120 may process images acquired in the building 130 and provide a three-dimensional model of a house based on the processed images. The user 102 may interact with the application 120 via the electronic device 110 and/or attached devices of the electronic device 110.
By way of example, the application 120 may generate a three-dimensional model of a target space in the building 130 based on images acquired by the acquisition device 140 and, in turn, provide the user 102 with panoramic roaming of the target space in the building 130. During the panoramic roaming of the target space by the user 102, the application 120 may present to the user 102, e.g., via a display device (not shown), scene images 150 of the target space from different viewing angles. The user 102 may convert between different viewing angles of the target space during the panoramic roaming process.
In some embodiments, the electronic device 110 may be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile handset, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, Personal Communication System (PCS) device, personal navigation device, Personal Digital Assistant (PDA), audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, game device, or any combination of the preceding, including the accessories and peripherals of these devices, or any combination thereof. In some embodiments, the electronic device 110 can also support any type of user interface (such as "wearable" circuitry, etc.). A remote device (not shown) may be, for example, any of various types of computing systems/servers capable of providing computing capability, including but not limited to mainframes, edge computing nodes, computing devices in a cloud environment, and so forth.
It should be understood that the description of the structure and function of environment 100 is for exemplary purposes only and does not imply any limitation as to the scope of the disclosure.
View angle conversion process
Fig. 2 illustrates a flow diagram of a process 200 for controlling perspective conversion according to some embodiments of the present disclosure. Process 200 may be implemented at electronic device 110. For ease of discussion, the process 200 will be described with reference to the environment 100 of FIG. 1.
As shown in fig. 2, at block 210, the electronic device 110 obtains at least one parameter associated with a perspective transformation of a target space of the building 130.
Fig. 3 illustrates a schematic diagram of an operator interface for inputting parameters for controlling perspective transitions, in accordance with some embodiments of the present disclosure. As shown in fig. 3, in some embodiments, the electronic device 110, e.g., via the application 120, may present a user interface element 300 for controlling perspective conversion and receive an input value for at least one parameter via the user interface element 300.
The user interface element 300 may include, for example, an input window 310 for entering the range of the hot zone filter. In the present disclosure, the term "hot zone" may refer to a planar region, in the three-dimensional coordinate system of a presented scene of the target space, associated with a position point selected by the user in that scene. A value for the hot zone may be entered through the input window 310 and is treated as a length in the coordinate system of the scene of the target space. The input window 310 may indicate the range of values the hot zone filter allows, for example 0-2 meters. The user 102 may enter any value within this interval through the input window 310 to define the extent of the hot zone.
The user interface element 300 may also include, for example, an input window 320 for entering the range of the angle filter. In the present disclosure, the term "angle" may refer to a field angle, in the three-dimensional coordinate system of a presented scene of the target space, opening from the position point of the camera associated with the acquisition of that scene toward a position point selected by the user in the scene, wherein the line connecting the camera position point and the user-selected position point may be regarded as the bisector of the angle. The value of the angle may be entered through the input window 320. The input window 320 may indicate the range of angle values allowed, for example 0-75 degrees. The user 102 defines the angle by entering any value within this interval through the input window 320.
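By way of illustration only, the two parameters described above can be modeled as in the following minimal Python sketch; the class and field names are assumptions introduced here, and the allowed intervals follow the 0-2 meter and 0-75 degree examples above, so this is not a definitive implementation of the disclosed scheme.

```python
# Minimal sketch of the two view angle conversion parameters described
# above. Class and field names are illustrative assumptions; the allowed
# intervals follow the 0-2 meter and 0-75 degree examples in the text.
from dataclasses import dataclass

@dataclass
class ViewTransitionParams:
    hot_zone_radius_m: float  # hot zone filter: a length in scene coordinates
    view_angle_deg: float     # field angle opening from the camera toward the selection

    def __post_init__(self) -> None:
        if not 0.0 <= self.hot_zone_radius_m <= 2.0:
            raise ValueError("hot zone radius must be in 0-2 meters")
        if not 0.0 <= self.view_angle_deg <= 75.0:
            raise ValueError("view angle must be in 0-75 degrees")

# Values a user might enter through input windows 310 and 320:
params = ViewTransitionParams(hot_zone_radius_m=1.5, view_angle_deg=60.0)
```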
It should be understood that the view angle conversion may be controlled based on parameters other than the hot zone range and the angle. For example, a distance between a transition point triggering the view angle conversion and the location point selected by the user in the current scene may be defined. In the case where multiple transition points can trigger the view angle conversion, an appropriate transition point can be selected by this distance. This process is further explained below.
Referring back to fig. 2, at block 220, if the electronic device 110 detects a selection of a first location in a currently presented first scene of the target space, then at block 230 the electronic device 110 determines a target field-of-view region based on the first location and the at least one parameter.
Fig. 4A illustrates a currently presented first scene 410 of a target space of a building 130. The user interface element 300 for controlling perspective transition may be presented simultaneously with the first scene 410.
In some embodiments, if the electronic device 110 detects that the user 102 selects the location 401 in the currently presented first scene 410, for example, the user 102 clicks the location 401 with a mouse or touches the location 401 with a finger, the electronic device 110 may, for example, obtain the hot zone filtering parameter input by the user to determine the target field-of-view region 402, i.e., the hot zone range. In fig. 4A, the target field-of-view region 402 may be a circular region in the three-dimensional coordinate system of the presented first scene 410, centered on the location 401 selected by the user 102, with a radius 405 given by the input hot zone filtering parameter. It should be appreciated that the circular region shown in fig. 4A is merely an example of the determined target field-of-view region. The target field-of-view region 402 determined based on the location 401 selected by the user 102 and the hot zone filtering parameter may also be a region of another shape, such as a square, an equilateral triangle, or another regular polygon.
In some embodiments, the electronic device 110 may also obtain, for example, the user-input angle parameter to determine the target field-of-view region 404. In fig. 4A, the target field-of-view region 404 may be a cone region in the three-dimensional coordinate system of the presented first scene 410, determined by the positional relationship between the location 401 selected by the user 102 and the position 403 of the camera associated with the image acquisition of the first scene 410, together with the user-input angle. The angle 406 may be the apex angle of the cone, the distance between the location 401 and the position 403 may be the height of the cone, and the plane determined with reference to the location 401 may be the base of the cone. It should be appreciated that the cone region shown in FIG. 4A is merely an example of the determined target field-of-view region. The target field-of-view region 404 determined based on the angle parameter may also be any other spatial geometry.
It should be appreciated that the user 102 may provide one or multiple parameters for controlling the view angle conversion at the same time, and the electronic device 110 may determine the different target field-of-view regions 402 and/or 404 based on one or more of these parameters.
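The two example region shapes can be tested geometrically, as in the following sketch. It is illustrative only: the helper names are assumptions, and the sketch implements just the circular hot zone around the selected location 401 and the cone with its apex at the camera position 403, the camera-to-selection line as its bisector, and its base plane through the selected location.

```python
# Sketch of membership tests for the two example target field-of-view
# regions: a circle (hot zone) around the selected location 401, and a
# cone with apex at camera position 403, the camera-to-selection line as
# bisector, apex angle 406, and base plane through location 401.
import math

def in_hot_zone(point, selected, radius):
    # Circular region 402: within `radius` of the selected location.
    return math.dist(point, selected) <= radius

def in_view_cone(point, camera, selected, apex_angle_deg):
    # Cone region 404.
    axis = [s - c for s, c in zip(selected, camera)]
    to_point = [p - c for p, c in zip(point, camera)]
    height = math.hypot(*axis)       # camera-to-selection distance = cone height
    dist = math.hypot(*to_point)
    if height == 0.0 or dist == 0.0:
        return dist == 0.0           # degenerate: the apex itself counts as inside
    proj = sum(a * t for a, t in zip(axis, to_point)) / height
    if proj < 0.0 or proj > height:
        return False                 # behind the apex or beyond the base plane
    half_angle = math.radians(apex_angle_deg) / 2.0
    return math.acos(max(-1.0, min(1.0, proj / dist))) <= half_angle

# Illustrative coordinates in the scene's coordinate system:
selected = (2.0, 0.0, 1.0)
camera = (0.0, 0.0, 1.6)
candidate = (1.6, 0.3, 1.0)
inside = in_hot_zone(candidate, selected, radius=1.5) and \
         in_view_cone(candidate, camera, selected, apex_angle_deg=60.0)
```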
In this way, a target field-of-view region containing transition points can be determined according to user requirements, enabling a personalized panoramic roaming process and further improving the user experience.
Alternatively or additionally, in order to visually present the target field-of-view region determined according to the parameters for controlling the view angle conversion and the user-selected position in the current scene, the target field-of-view region may be marked explicitly on the currently presented first scene, for example by the grid-line filling shown in fig. 4A. Additionally, whether to explicitly mark the target field-of-view region on the currently presented first scene may be determined according to a selection input by the user at the user interface element 300. This shows the user, in a more visual way, in which regions and based on which transition points the device may switch the viewing angle.
Alternatively or additionally, an information cue regarding the viewing-angle switch may also be presented on the currently presented first scene. For example, when the electronic device 110 detects a selection of a location in the currently presented first scene and determines a target field-of-view region, the electronic device 110 may present cues about the viewing-angle switch that may occur, such as whether a view angle conversion will occur and whether the view jump involves a change in camera orientation. In this way, the user can be more intuitively informed of the upcoming view angle conversion.
At block 240, the electronic device 110 may render a second scene of the target space based on the target transition point associated with the target field of view region.
In some embodiments, multiple transition points may be included within the target field-of-view region determined by the electronic device 110. In this case, the electronic device 110 may obtain information associated with the selection of a transition point. In some embodiments, this information may also be input, for example, as a parameter for controlling the view angle conversion. The information may, for example, indicate that the transition point closest to, or farthest from, the location 401 selected by the user 102 in the first scene 410 is to be selected. In other embodiments, the information may also indicate, for example, a predetermined distance in the coordinate system of the scene of the target space.
With the acquired information associated with the selection of a transition point, the electronic device 110 may select a target transition point from a plurality of candidate transition points included within the target field-of-view region. For example, the electronic device 110 may calculate the respective distances between the candidate transition points within the target field-of-view region and the location 401 selected by the user 102 in the first scene 410, and select the target transition point based on a comparison among these distances, or based on a comparison between these distances and a given predetermined distance.
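For example, the distance-based choice among candidates, together with the camera-adjacent fallback described below, might look like the following sketch; the function name, the prefer_nearest flag, and the coordinates are illustrative assumptions rather than the patent's interface.

```python
# Sketch of selecting the target transition point among the candidates
# inside the target field-of-view region by distance to the selected
# location 401; names and coordinates are illustrative assumptions.
import math

def select_target_transition_point(candidates, selected, prefer_nearest=True):
    if not candidates:
        return None  # no candidate in the region; see the fallback below
    key = lambda p: math.dist(p, selected)
    return min(candidates, key=key) if prefer_nearest else max(candidates, key=key)

selected = (2.0, 0.0, 1.5)
candidates = [(2.5, 0.0, 1.5), (4.0, 0.0, 1.5)]
target = select_target_transition_point(candidates, selected)  # -> (2.5, 0.0, 1.5)

# Fallback described below: when the region holds no candidate, switch to
# the transition point adjacent to the camera position 403 instead.
camera = (0.0, 0.0, 1.5)
all_transition_points = [(0.5, 0.0, 1.5), (6.0, 0.0, 1.5)]
if target is None:
    target = min(all_transition_points, key=lambda p: math.dist(p, camera))
```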
As illustrated in fig. 4B, after determining the target transition point, the electronic device 110 may switch from the currently presented first scene 410 to presenting the second scene 420 of the target space.
It is also possible that no transition point capable of triggering a view jump exists within the target field-of-view region determined by the electronic device 110. In this case, the electronic device 110 may select a transition point adjacent to the position 403 of the camera associated with the image acquisition of the first scene 410.
Alternatively or additionally, if no transition point capable of triggering a view jump exists in the target field-of-view region determined by the electronic device 110, the electronic device 110 may also present a prompt that no candidate transition point exists in the target field-of-view region, or a prompt recommending that the user update the parameters for controlling the view angle conversion.
By this scheme, the roaming viewing angle of scenes in the target space can be controlled, meeting personalized roaming requirements for the target space and further improving the user experience.
Example apparatus and devices
Embodiments of the present disclosure also provide corresponding apparatuses for implementing the above methods or processes. Fig. 5 illustrates a schematic block diagram of an apparatus 500 for controlling perspective conversion according to some embodiments of the present disclosure.
As shown in fig. 5, the apparatus 500 may include a parameter acquisition module 510 configured to acquire at least one parameter associated with a perspective conversion of a target space. The apparatus may further comprise a field of view region determination module 520 configured to determine a target field of view region based on a first location in a currently presented first scene of the target space and the at least one parameter in response to detecting a selection of the first location. The apparatus 500 may further comprise a scene rendering module 530 configured to render a second scene of the target space based on the target transition point associated with the target field of view region.
In some embodiments, the parameter acquisition module 510 may be further configured to display a user interface element for controlling perspective transformation while presenting the target space; and receiving an input value for the at least one parameter via the user interface element.
In some embodiments, the parameter acquisition module 510 may be further configured to receive an input specifying a first length in a coordinate system of the target space; and the field of view region determination module 520 may be further configured to determine the extent of the target field of view region based on the first length with the first position as a reference point.
In some embodiments, the parameter acquisition module 510 may be further configured to receive an input of a field angle in a direction from the second position of the camera toward the first position in the coordinate system of the target space; and the field of view region determining module 520 may be further configured to determine the range of the target field of view region based on the field angle with the first position and the second position as reference points.
In some embodiments, the target field of view region is a geometric volume of space having the second position as a vertex, a distance between the second position and the first position as a height, and the field angle as a vertex angle.
In some embodiments, the apparatus 500 may be further configured to calculate respective distances between a plurality of candidate transition points within the target view region and the first location; and selecting a candidate transition point having the largest or smallest distance from the first position as the target transition point.
In some embodiments, the apparatus 500 may be further configured to determine a transition point adjacent to a camera in a coordinate system of the target space as the target transition point in response to determining that no candidate transition point exists within the target view field region.
In some embodiments, the apparatus 500 may be further configured to, in response to determining that no candidate transition point exists within the target field-of-view region, present at least one of: a prompt that no candidate transition point exists within the target field-of-view region, and a prompt recommending an update of the at least one parameter.
The elements included in apparatus 500 may be implemented in a variety of ways, including software, hardware, firmware, or any combination thereof. In some embodiments, one or more of the elements may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to, or in the alternative to, machine-executable instructions, some or all of the elements in apparatus 500 may be implemented at least in part by one or more hardware logic components. By way of example, and not limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
Fig. 6 illustrates a block diagram of a computing device/server 600 in which one or more embodiments of the present disclosure may be implemented. It should be understood that the computing device/server 600 illustrated in fig. 6 is merely exemplary and should not be construed as limiting the functionality or scope of the embodiments described herein in any way.
As shown in fig. 6, computing device/server 600 is in the form of a general purpose computing device. Components of computing device/server 600 may include, but are not limited to, one or more processors or processing units 610, memory 620, storage 630, one or more communication units 640, one or more input devices 650, and one or more output devices 660. The processing unit 610 may be a real or virtual processor and can perform various processes according to programs stored in the memory 620. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capability of computing device/server 600.
Computing device/server 600 typically includes a number of computer storage media. Such media may be any available media accessible by computing device/server 600, including but not limited to volatile and non-volatile media, removable and non-removable media. Memory 620 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory), or some combination thereof. Storage 630 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium that can be used to store information and/or data (e.g., training data) and that can be accessed within computing device/server 600.
Computing device/server 600 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 620 may include a computer program product 625 having one or more program modules configured to perform the various methods or acts of the various embodiments of the disclosure.
The communication unit 640 enables communication with other computing devices over a communication medium. Additionally, the functionality of the components of computing device/server 600 may be implemented in a single computing cluster or multiple computing machines capable of communicating over a communications connection. Thus, computing device/server 600 may operate in a networked environment using logical connections to one or more other servers, network Personal Computers (PCs), or another network node.
The input device 650 may be one or more input devices such as a mouse, keyboard, trackball, or the like. The output device 660 may be one or more output devices such as a display, speakers, printer, or the like. Computing device/server 600 may also communicate, as desired via the communication unit 640, with one or more external devices (not shown) such as storage devices and display devices, with one or more devices that enable a user to interact with computing device/server 600, or with any device (e.g., network card, modem, etc.) that enables computing device/server 600 to communicate with one or more other computing devices. Such communication may be performed via input/output (I/O) interfaces (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium is provided, on which one or more computer instructions are stored, wherein the one or more computer instructions are executed by a processor to implement the above-described method.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, a special purpose computer, or another programmable view-angle-conversion apparatus, such that the instructions, when executed via the computer or the other programmable view-angle-conversion apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer-readable program instructions may also be loaded onto a computer, other programmable viewing angle conversion apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable viewing angle conversion apparatus, or other device to produce a computer implemented process such that the instructions which execute on the computer, other programmable viewing angle conversion apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure, and the above description is illustrative, not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen in order to best explain the principles of implementations, the practical application, or improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.

Claims (18)

1. A method for controlling perspective conversion, comprising:
acquiring at least one parameter associated with a perspective transformation of a target space;
in response to detecting a selection of a first location in a currently presented first scene of the target space, determining a target field of view region based on the first location and the at least one parameter; and
rendering a second scene of the target space based on a target transition point associated with the target field of view region.
2. The method of claim 1, wherein obtaining the at least one parameter comprises:
displaying a user interface element for controlling perspective conversion while presenting the target space; and
receiving, via the user interface element, an input value for the at least one parameter.
3. The method of claim 1,
wherein obtaining the at least one parameter comprises: receiving an input specifying a first length in a coordinate system of the target space;
and wherein determining the target field of view region comprises: determining a range of the target field of view region based on the first length with the first position as a reference point.
4. The method of claim 1,
wherein obtaining the at least one parameter comprises: receiving an input of an angle of view in a direction from a second position of the camera toward the first position in a coordinate system of the target space;
and wherein determining the target field of view region comprises: determining the range of the target field of view region based on the field angle, with the first position and the second position as reference points.
5. The method of claim 4, wherein the target field of view region is a geometric volume of space having the second location as a vertex, a distance between the second location and the first location as a height, and the field angle as a vertex angle.
6. The method of claim 1, further comprising determining the target transition point as follows:
calculating respective distances between a plurality of candidate transition points within the target view field region and the first location; and
selecting a candidate transition point having the largest or smallest distance from the first location as the target transition point.
7. The method of claim 1, further comprising:
in response to determining that no candidate transition point exists within the target view field region, determining a transition point adjacent to a camera in a coordinate system of the target space as the target transition point.
8. The method of claim 1, further comprising:
in response to determining that no candidate transition points exist within the target view field region, presenting at least one of:
a prompt that no candidate transition point exists within the target view field region, and
a prompt recommending an update of the at least one parameter.
9. An apparatus for controlling a perspective conversion, comprising:
a parameter acquisition module configured to acquire at least one parameter associated with a perspective conversion of a target space;
a field of view region determination module configured to determine a target field of view region based on a first location in a currently presented first scene of the target space and the at least one parameter in response to detecting a selection of the first location; and
a scene rendering module configured to render a second scene of the target space based on a target transition point associated with the target field of view region.
10. The apparatus of claim 9, wherein the parameter acquisition module is further configured to:
displaying a user interface element for controlling perspective conversion while presenting the target space; and
receiving, via the user interface element, an input value for the at least one parameter.
11. The apparatus of claim 9, wherein
The parameter acquisition module is further configured to receive an input specifying a first length in a coordinate system of the target space; and
the field of view region determination module is further configured to determine a range of the target field of view region based on the first length with the first position as a reference point.
12. The apparatus of claim 9, wherein
The parameter acquisition module is further configured to receive an input of an angle of view in a direction from a second position of the camera toward the first position in a coordinate system of the target space; and
the field of view region determination module is further configured to determine a range of the target field of view region based on the field angle with the first position and the second position as reference points.
13. The device of claim 12, wherein the target field of view region is a geometric volume of space having the second location as a vertex, a distance between the second location and the first location as a height, and the field angle as an apex angle.
14. The apparatus of claim 9, further configured to:
calculating respective distances between a plurality of candidate transition points within the target view field region and the first location; and
selecting a candidate transition point having the largest or smallest distance from the first location as the target transition point.
15. The apparatus of claim 9, further configured to:
in response to determining that no candidate transition point exists within the target view field region, determining a transition point adjacent to a camera in a coordinate system of the target space as the target transition point.
16. The apparatus of claim 9, further configured to:
in response to determining that no candidate transition points exist within the target view field region, presenting at least one of:
a prompt that no candidate transition point exists within the target view field region, and
a prompt recommending an update of the at least one parameter.
17. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions when executed by the at least one processing unit causing the electronic device to perform the method of any of claims 1-8.
18. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN202210806929.8A 2022-07-08 2022-07-08 Method, apparatus, device and storage medium for controlling view angle conversion Pending CN115097975A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210806929.8A CN115097975A (en) 2022-07-08 2022-07-08 Method, apparatus, device and storage medium for controlling view angle conversion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210806929.8A CN115097975A (en) 2022-07-08 2022-07-08 Method, apparatus, device and storage medium for controlling view angle conversion

Publications (1)

Publication Number Publication Date
CN115097975A 2022-09-23

Family

ID=83296176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210806929.8A Pending CN115097975A (en) 2022-07-08 2022-07-08 Method, apparatus, device and storage medium for controlling view angle conversion

Country Status (1)

Country Link
CN (1) CN115097975A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9662564B1 (en) * 2013-03-11 2017-05-30 Google Inc. Systems and methods for generating three-dimensional image models using game-based image acquisition
CN103729558A (en) * 2013-12-26 2014-04-16 北京像素软件科技股份有限公司 Scene change method
CN112288885A (en) * 2020-11-20 2021-01-29 深圳智润新能源电力勘测设计院有限公司 Roaming positioning method based on three-dimensional earth application and related device
CN114511684A (en) * 2021-01-07 2022-05-17 深圳思为科技有限公司 Scene switching method and device, electronic equipment and storage medium
CN113996060A (en) * 2021-10-29 2022-02-01 腾讯科技(成都)有限公司 Display picture adjusting method and device, storage medium and electronic equipment
CN114387398A (en) * 2022-01-18 2022-04-22 北京有竹居网络技术有限公司 Three-dimensional scene loading method, loading device, electronic equipment and readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024066208A1 (en) * 2022-09-26 2024-04-04 如你所视(北京)科技有限公司 Method and apparatus for displaying panoramic image of point location outside model, and device and medium
CN115857702A (en) * 2023-02-28 2023-03-28 北京国星创图科技有限公司 Scene roaming and view angle conversion method in space scene
CN115857702B (en) * 2023-02-28 2024-02-02 北京国星创图科技有限公司 Scene roaming and visual angle conversion method under space scene

Similar Documents

Publication Publication Date Title
US11290651B2 (en) Image display system, information processing apparatus, image display method, image display program, image processing apparatus, image processing method, and image processing program
JP6807877B2 (en) Methods and terminals for locking targets in the game scene
CN115097975A (en) Method, apparatus, device and storage medium for controlling view angle conversion
CN107172346B (en) Virtualization method and mobile terminal
CN110400337B (en) Image processing method, image processing device, electronic equipment and storage medium
JP6877149B2 (en) Shooting position recommendation method, computer program and shooting position recommendation system
EP3537276B1 (en) User interface for orienting a camera view toward surfaces in a 3d map and devices incorporating the user interface
CN103412720A (en) Method and device for processing touch-control input signals
CN110084797B (en) Plane detection method, plane detection device, electronic equipment and storage medium
US20150154736A1 (en) Linking Together Scene Scans
CN103713849A (en) Method and device for image shooting and terminal device
CN115079921A (en) Method, device, equipment and storage medium for controlling loading of scene information
CN115097976B (en) Method, apparatus, device and storage medium for image processing
CN115344121A (en) Method, device, equipment and storage medium for processing gesture event
CN115115786A (en) Method, apparatus, device and storage medium for three-dimensional model generation
CN110827412A (en) Method, apparatus and computer-readable storage medium for adapting a plane
CN115100359A (en) Image processing method, device, equipment and storage medium
CN115730092A (en) Method, apparatus, device and storage medium for content presentation
CN111913635B (en) Three-dimensional panoramic picture display method and device, mobile terminal and storage medium
CN114693893A (en) Data processing method and device, electronic equipment and storage medium
CN109472873B (en) Three-dimensional model generation method, device and hardware device
CN114511684A (en) Scene switching method and device, electronic equipment and storage medium
CN110827413A (en) Method, apparatus and computer-readable storage medium for controlling a change in a virtual object form
CN117312477B (en) AR technology-based indoor intelligent exhibition positioning method, device, equipment and medium
US10895953B2 (en) Interaction with a three-dimensional internet content displayed on a user interface

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 802, Information Building, 13 Linyin North Street, Pinggu District, Beijing, 101299

Applicant after: Beijing youzhuju Network Technology Co.,Ltd.

Address before: 101299 Room 802, information building, No. 13, linmeng North Street, Pinggu District, Beijing

Applicant before: Beijing youzhuju Network Technology Co.,Ltd.