CN115018967B - Image generation method, device, equipment and storage medium - Google Patents
- Publication number
- CN115018967B (application CN202210767384.4A)
- Authority
- CN
- China
- Prior art keywords
- rendering
- data
- determining
- road side
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
- G06F16/535—Filtering based on additional data, e.g. user or group profiles
Abstract
The invention discloses an image generation method, apparatus, device and storage medium. The method includes: determining a current viewing angle, and determining a rendering range according to the current viewing angle; screening roadside data acquired from roadside devices according to the rendering range, and determining data to be rendered according to the screening result; and sending the data to be rendered to a rendering engine, so that the rendering engine renders the data to be rendered to generate a rendered image. In this scheme, the server screens the roadside data according to the rendering range determined by the current viewing angle to obtain the data to be rendered, and sends only that data to the rendering engine. Because the rendering engine renders only the data to be rendered, its data-processing load is reduced and the generated rendered image contains only the target objects. This solves the problems of low rendering efficiency, rendering stutter and frequent computation when the rendered image is displayed on the preset interface of the rendering engine, and improves user experience.
Description
Technical Field
Embodiments of the present invention relate to image processing technologies, and in particular, to an image generating method, apparatus, device, and storage medium.
Background
Three-dimensional rendering refers to the process of generating a two-dimensional image from a three-dimensional scene using a rendering engine in a computer. The three-dimensional scene may include static objects to be rendered and dynamic objects to be rendered: the static objects may include scene elements that are collected, produced and published in advance, such as roads, roadside facilities, buildings, terrain, vegetation and water systems, while the dynamic objects may include elements that change in real time, such as vehicles, traffic participants and traffic signals.
In the prior art, a rendering engine receives, via the cloud, roadside data sent in real time by roadside devices, computes whether each piece of roadside data lies within the current display range, renders the data when it is determined to be within the current display range, and does not render it otherwise.
Because the rendering engine must compute in real time whether every piece of roadside data is within the current display range before rendering the in-range data, the computational load is heavy, the rendering engine is prone to stutter, the rendering frame rate is low, and user experience suffers.
Disclosure of Invention
The invention provides an image generation method, apparatus, device and storage medium, which solve the problems of heavy computation and rendering stutter when generating a rendered image, and improve rendering efficiency and rendering performance.
In a first aspect, an embodiment of the present invention provides an image generating method, including:
determining a current view angle, and determining a rendering range according to the current view angle;
screening the road side data acquired from the road side equipment according to the rendering range, and determining the data to be rendered according to the screening result;
and sending the data to be rendered to a rendering engine so that the rendering engine renders the data to be rendered to generate a rendering image.
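The three claimed steps can be sketched as follows. This is a minimal illustration, not the patented implementation: all function and field names are hypothetical, and a square rendering range derived from an assumed preset side length is used for concreteness.

```python
def generate_rendered_image(roadside_data, current_view, send_to_engine):
    """Sketch of the claimed flow (hypothetical names).

    1. Determine the rendering range from the current viewing angle.
    2. Screen the roadside data against that range.
    3. Send only the screened data to the rendering engine.
    """
    # Step 1: square rendering range around the current view center.
    cx, cy = current_view["center"]
    half = current_view.get("side_length", 200.0) / 2.0  # assumed preset side length
    min_x, min_y, max_x, max_y = cx - half, cy - half, cx + half, cy + half

    # Step 2: keep only data whose position falls inside the range.
    data_to_render = [
        d for d in roadside_data
        if min_x <= d["position"][0] <= max_x and min_y <= d["position"][1] <= max_y
    ]

    # Step 3: forward only the screened subset to the rendering engine.
    return send_to_engine(data_to_render)
```

The point of the sketch is that the screening happens before anything reaches the engine, so the engine's input shrinks from all roadside data to the in-range subset.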
The technical scheme of the embodiment of the invention provides an image generation method comprising: determining a current viewing angle and determining a rendering range according to the current viewing angle; screening roadside data acquired from roadside devices according to the rendering range, and determining data to be rendered according to the screening result; and sending the data to be rendered to a rendering engine so that the rendering engine renders the data to be rendered to generate a rendered image. The rendering range indicates the extent of the objects to be displayed in the rendered image under the current viewing angle. The server receives the roadside data sent by all roadside devices under the current viewing angle; if all of this data were forwarded to the rendering engine, the engine would render it all, incurring a heavy computational load and producing a rendered image that includes objects that need not be displayed. The roadside data sent to the server is therefore screened according to the rendering range: the roadside data acquired by the roadside devices within the rendering range is determined to be the data to be rendered, which is a reduced subset of all roadside data, and only this data is sent to the rendering engine so that it renders the data to be rendered and generates a rendered image containing the objects to be rendered.
In other words, the roadside data acquired by the roadside devices within the rendering range determined by the current viewing angle (the target roadside devices) is determined in the server to be the data to be rendered and sent to the rendering engine, which renders only that data to obtain the rendered image. This reduces the data-processing load of the rendering engine, ensures that the generated rendered image contains only the target objects, solves the problems of low rendering efficiency, rendering stutter and frequent computation when the rendered image is displayed on the preset interface, and thereby improves user experience.
Further, determining the current viewing angle includes:
Monitoring a viewing-angle adjustment operation on a preset interface, and determining the current viewing angle according to the adjustment operation.
Further, determining the current viewing angle according to the viewing angle adjustment operation includes:
The viewing angle before the adjustment operation is recorded as a first viewing angle and the viewing angle after the adjustment operation as a second viewing angle, and a second viewing-angle height is determined based on the second viewing angle;
If the height of the second viewing angle is less than or equal to a preset height, the current viewing angle is determined according to the straight-line distance between the center point of the second viewing angle and the center point of the first viewing angle.
Further, determining the current viewing angle according to a linear distance between the center point of the second viewing angle and the center point of the first viewing angle includes:
If the straight-line distance is greater than a preset distance, the second viewing angle is determined to be the current viewing angle; otherwise, the first viewing angle is determined to be the current viewing angle.
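The viewing-angle selection rule above can be sketched as a small decision function. This is a hedged illustration: the threshold values and the view representation are assumptions, not values from the patent.

```python
import math

PRESET_HEIGHT = 500.0   # assumed preset height threshold
PRESET_DISTANCE = 50.0  # assumed preset distance threshold

def select_current_view(first_view, second_view):
    """Return the view to use after a view-adjustment operation.

    Each view is a dict with a 'center' (x, y) and a 'height'.
    Below the preset height, the second view replaces the first only
    if its center point moved farther than the preset distance.
    """
    if second_view["height"] > PRESET_HEIGHT:
        # Above the preset height the rendered image would be too
        # low-resolution, so the current view is left unchanged here.
        return first_view
    dx = second_view["center"][0] - first_view["center"][0]
    dy = second_view["center"][1] - first_view["center"][1]
    if math.hypot(dx, dy) > PRESET_DISTANCE:
        return second_view
    return first_view
```

The distance check avoids recomputing the rendering range (and re-screening the roadside data) for small camera nudges that would not change which devices are in range.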
Further, determining a rendering range according to the current viewing angle includes:
Determining a current center point according to the current viewing angle;
Taking the current center point as the rendering center, and determining the rendering range according to the rendering center and a preset rendering side length or a preset rendering radius.
Further, screening the roadside data acquired from the roadside devices according to the rendering range and determining the data to be rendered according to the screening result includes:
determining target roadside devices according to the rendering range;
Screening the target roadside data acquired by the target roadside devices out of the roadside data acquired from the roadside devices, and determining the data to be rendered according to the target roadside data.
Further, determining a target roadside device according to the rendering range includes:
Determining a target acquisition position according to the rendering range, and determining the target roadside device according to the target acquisition position, wherein the target acquisition position is an acquisition position whose position information lies within the rendering range.
Further, determining a target roadside device according to the rendering range includes:
Determining a target device position according to the rendering range, and determining the target roadside device according to the target device position.
Further, the roadside devices include an image acquisition device and a radar device, and the target roadside data includes target image data and target radar data; accordingly, determining the data to be rendered according to the target roadside data further includes:
Performing data fusion on the target image data and the target radar data to obtain the data to be rendered.
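One simple way to realize the fusion step is to merge camera detections and radar tracks that refer to the same target. The patent does not specify a fusion algorithm, so the sketch below is an assumption: it joins records by a shared target `id` (a hypothetical field), letting the camera contribute e.g. object class and the radar contribute position and speed.

```python
def fuse_roadside_data(image_targets, radar_targets):
    """Merge camera detections with radar tracks by a shared target id.

    image_targets / radar_targets: lists of dicts, each carrying an 'id'.
    Fields from both sensors are combined per target; targets seen by
    only one sensor are kept as-is.
    """
    # Start from the camera detections, keyed by target id.
    merged = {t["id"]: dict(t) for t in image_targets}
    # Overlay radar fields; radar-only targets are added as new entries.
    for r in radar_targets:
        merged.setdefault(r["id"], {}).update(r)
    return list(merged.values())
```

A real roadside deployment would also need time alignment and coordinate-frame registration between the two sensors before an id-level join like this is meaningful.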
In a second aspect, an embodiment of the present invention further provides an image generating apparatus, including:
The determining module is used for determining a current viewing angle and determining a rendering range according to the current viewing angle;
The screening module is used for screening the roadside data acquired from the roadside devices according to the rendering range and determining the data to be rendered according to the screening result;
And the rendering module is used for sending the data to be rendered to a rendering engine so that the rendering engine renders the data to be rendered and generates a rendered image containing the perceived targets.
In a third aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the image generation method according to any one of the first aspects when executing the program.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions for performing the image generation method of any of the first aspects when executed by a computer processor.
In a fifth aspect, the present application provides a computer program product comprising computer instructions which, when run on a computer, cause the computer to perform the image generation method as provided in the first aspect.
It should be noted that the above-mentioned computer instructions may be stored in whole or in part on a computer-readable storage medium. The computer readable storage medium may be packaged together with the processor of the image generating apparatus or may be packaged separately from the processor of the image generating apparatus, which is not limited in the present application.
The description of the second, third, fourth and fifth aspects of the present application may refer to the detailed description of the first aspect; also, the advantageous effects described in the second aspect, the third aspect, the fourth aspect, and the fifth aspect may refer to the advantageous effect analysis of the first aspect, and are not described herein.
In the present application, the names of the above-described image generating apparatuses do not constitute limitations on the devices or function modules themselves, and in actual implementations, these devices or function modules may appear under other names. Insofar as the function of each device or function module is similar to that of the present application, it falls within the scope of the claims of the present application and the equivalents thereof.
These and other aspects of the application will be more readily apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an image generating method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another image generation method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an implementation of an image generating method according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
The term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone.
The terms "first" and "second" and the like in the description and in the drawings are used for distinguishing between different objects or between different processes of the same object and not for describing a particular order of objects.
Furthermore, references to the terms "comprising" and "having" and any variations thereof in the description of the present application are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like. Furthermore, embodiments of the invention and features of the embodiments may be combined with each other without conflict.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more.
Three-dimensional rendering refers to the process of generating a two-dimensional image from a three-dimensional scene using a rendering engine in a computer. In such a scene, a roadside rod may be erected at the southwest, northwest, northeast and/or southeast corner of each intersection, with several roadside devices mounted on it. The roadside devices acquire roadside data in real time and send it to a server, and the server sends the roadside data to a rendering engine so that the engine renders the roadside data within the current display range to generate a rendered image.
However, the rendering engine itself must determine whether each piece of roadside data is within the current display range, which is computationally intensive and inefficient. The application therefore provides an image generation method to solve the problems of low rendering efficiency, rendering stutter and frequent computation when a rendered image is displayed on the preset interface of the rendering engine, and to improve the rendering performance and the stability of the rendering engine.
The image generation method proposed by the present application will be described in detail with reference to drawings and embodiments.
Fig. 1 is a flowchart of an image generating method according to an embodiment of the present invention, where the method may be performed by an image generating apparatus, and specifically includes the following steps:
Step 110, determining a current view angle, and determining a rendering range according to the current view angle.
Specifically, after the rendering engine is initialized, the initial viewing angle may be set to a default viewing angle. The server continuously monitors viewing-angle adjustment operations on the preset interface of the rendering engine and determines the current viewing angle accordingly. For example, the preset interface may provide a viewing-angle adjustment control, in which case the adjustment operation is a drag operation on that control: if the server receives a drag operation triggered by the user based on the control, it determines the current viewing angle according to the drag operation; otherwise, the default viewing angle is determined to be the current viewing angle. After the current viewing angle is determined, the current view center can be determined as the center-point coordinate of the main-camera coordinate system corresponding to the current viewing angle. Taking the current view center as the rendering center, a polygonal rendering range is then determined according to a preset side length, or a circular rendering range according to a preset radius. The rendering range is a predetermined area on the preset interface that corresponds to the static objects to be rendered within a predetermined range; for example, it may be a square area of the screen corresponding to a predetermined geographical range under the current viewing angle.
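The rendering-range construction just described is straightforward to express in code. The helpers below are an illustrative sketch (names and the 2-D planar coordinates are assumptions), covering both the square-from-side-length and circle-from-radius variants:

```python
import math

def square_rendering_range(center, side_length):
    """Square rendering range centered on the current view center,
    returned as (min_x, min_y, max_x, max_y)."""
    cx, cy = center
    half = side_length / 2.0
    return (cx - half, cy - half, cx + half, cy + half)

def in_circular_range(center, radius, point):
    """True if a point lies inside a circular rendering range
    defined by the view center and a preset radius."""
    return math.hypot(point[0] - center[0], point[1] - center[1]) <= radius
```

Either representation works for the later screening step; the square form makes the in-range test a pair of interval checks, while the circular form needs one distance computation per candidate.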
It should be noted that if, after the drag operation, the viewing-angle height corresponding to the current viewing angle is greater than the preset height, the resolution of a rendered image determined from that viewing angle would be too low. In this case the rendered image corresponding to the current viewing angle may be hidden, and the determined data to be rendered need not be sent to the rendering engine.
In the embodiment of the invention, the current viewing angle is determined from the default viewing angle or the drag operation, and the rendering range is then determined from the corresponding view center together with the preset side length or preset radius, so that the rendering engine only needs to render objects within the rendering range, reducing its data-processing load.
And 120, screening the road side data acquired from the road side equipment according to the rendering range, and determining the data to be rendered according to the screening result.
The roadside devices are communicatively connected to the server and can send the acquired roadside data to the server in real time.
Specifically, the roadside data may include the data sent to the server by all roadside devices in the current three-dimensional scene, so it must be screened according to the rendering range. The server stores the position information of each acquisition position and of each device position, and after the rendering range is determined, screening may proceed in either of two ways. In the first, a target acquisition position is determined from the position information of the rendering range and of each acquisition position; specifically, an acquisition position whose position information lies within the rendering range is determined to be a target acquisition position, and the roadside data acquired by the roadside devices erected at the target acquisition position is determined to be the data to be rendered. In the second, a target device position is determined from the position information of the rendering range and of each device position; specifically, a device position whose position information lies within the rendering range is determined to be a target device position, and the roadside data acquired by the roadside devices erected at the target device position is determined to be the data to be rendered.
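The position-based screening can be sketched as a simple filter. This illustration assumes a square `(min_x, min_y, max_x, max_y)` rendering range and a hypothetical `device_position` field carried by each roadside record:

```python
def screen_roadside_data(roadside_data, rendering_range):
    """Keep only roadside records whose device position lies inside
    a square (min_x, min_y, max_x, max_y) rendering range.

    roadside_data: list of dicts, each with a 'device_position' (x, y).
    """
    min_x, min_y, max_x, max_y = rendering_range
    return [
        d for d in roadside_data
        if min_x <= d["device_position"][0] <= max_x
        and min_y <= d["device_position"][1] <= max_y
    ]
```

Screening against acquisition positions instead of device positions is the same filter with a different position field; the key point is that the comparison runs once on the server rather than per frame in the rendering engine.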
For example, when the current three-dimensional scene is a street scene, roadside rods are erected at the southwest, northwest, northeast and/or southeast corner of each intersection, each carrying several roadside devices. The server may store the position information of each intersection and of each roadside rod, and screening the roadside data according to the rendering range may then include: determining an intersection whose position information lies within the rendering range as a target intersection, and determining the roadside data acquired by the roadside devices on the rods erected at the target intersection as the data to be rendered; or determining a roadside rod whose position information lies within the rendering range as a target roadside rod, and determining the roadside data acquired by the several roadside devices on the target roadside rod as the data to be rendered.
The server may also store identification information for each acquisition position and each device position, and the roadside data may carry the identification information of the acquisition position and of the device position of the roadside device that acquired it. After the rendering range is determined, the target acquisition positions and their identification information may be determined as above; if the identification information of a target acquisition position matches the acquisition-position identification carried by a piece of roadside data, that roadside data is determined to be data to be rendered. Alternatively, the target device positions and their identification information may be determined, and a piece of roadside data is determined to be data to be rendered if the identification information of a target device position matches the device-position identification it carries.
For example, in the street scene above, the server may further store an intersection ID for each intersection and a rod ID for each roadside rod, and screening the roadside data according to the rendering range may include: determining the intersections whose position information lies within the rendering range as target intersections and collecting the corresponding target intersection IDs; if the intersection ID carried by a piece of roadside data is among the target intersection IDs, that data is determined to be data to be rendered. Alternatively, the roadside rods whose position information lies within the rendering range are determined as target roadside rods and the corresponding target rod IDs collected; if the rod ID carried by a piece of roadside data is among the target rod IDs, that data is determined to be data to be rendered.
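The ID-based variant is a two-step filter: first compute the set of in-range rod (or intersection) IDs, then match incoming data against that set. A hedged sketch, with the `rod_positions` mapping and `rod_id` field as assumed names:

```python
def screen_by_rod_id(roadside_data, rod_positions, rendering_range):
    """ID-based screening of roadside data.

    Step 1: find the target rod IDs whose stored position lies inside
            the (min_x, min_y, max_x, max_y) rendering range.
    Step 2: keep only roadside records carrying one of those rod IDs.

    rod_positions: {rod_id: (x, y)} as stored on the server.
    """
    min_x, min_y, max_x, max_y = rendering_range
    target_rod_ids = {
        rod_id for rod_id, (x, y) in rod_positions.items()
        if min_x <= x <= max_x and min_y <= y <= max_y
    }
    return [d for d in roadside_data if d["rod_id"] in target_rod_ids]
```

Compared with matching raw coordinates per record, the ID set is computed once per rendering-range change, and each record then needs only a constant-time set-membership test.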
In the embodiment of the invention, the roadside data is thus screened according to the rendering range, and the roadside data within the rendering range is determined to be the data to be rendered.
And 130, sending the data to be rendered to a rendering engine so that the rendering engine renders the data to be rendered to generate a rendering image.
Specifically, the server may send the data to be rendered, obtained by screening the roadside data, to the rendering engine. Since the amount of data sent to the rendering engine is reduced, the engine can render the data to be rendered and obtain a rendered image containing the target objects within the rendering range.
In the embodiment of the invention, the roadside data within the rendering range (the target roadside data, i.e. the data to be rendered) is rendered to obtain the rendered image while the data-processing load of the rendering engine is reduced. Because this load is reduced, the rendering efficiency and the rendering frame rate are improved, the problem of rendering stutter during rendering is solved, and the stability of the rendering engine is further improved.
The image generation method provided by the embodiment of the invention comprises the following steps: determining a current view angle, and determining a rendering range according to the current view angle; screening the road side data acquired from the road side equipment according to the rendering range, and determining the data to be rendered according to the screening result; and sending the data to be rendered to a rendering engine so that the rendering engine renders the data to be rendered to generate a rendering image. According to the technical scheme, the rendering range can be determined according to the current view angle, the rendering range can indicate the range of the object to be displayed in the rendering image under the current view angle, the server can receive the road side data sent by the road side equipment, the road side data can comprise the data sent to the server by all the road side equipment under the current view angle, if the road side data sent to the server by all the road side equipment are sent to the rendering engine, the rendering engine renders the road side data to generate the rendering image, the calculated amount of the rendering engine is large, the rendering image comprises objects which are not needed to be displayed, therefore, the road side data sent to the server by the road side equipment are required to be screened according to the rendering range, the road side data acquired by the road side equipment in the rendering range are determined to be the data to be rendered, the data to be rendered is reduced compared with the data of all the road side data, and the data to be rendered are further sent to the rendering engine, so that the rendering engine renders the data to be rendered, and the rendering image containing the object to be rendered is generated. 
In the server, the road side data acquired by the road side equipment within the rendering range determined according to the current view angle, namely the target road side equipment, is determined to be the data to be rendered and sent to the rendering engine. The rendering engine only renders the data to be rendered to obtain a rendering image, so the data processing amount of the rendering engine is reduced and the generated rendering image only contains target objects. The problems of low rendering efficiency, rendering blocking and frequent calculation when the rendering image is displayed on a preset interface are solved, and the user experience is further improved.
Fig. 2 is a flowchart of another image generating method according to an embodiment of the present invention, which is embodied based on the above embodiment. As shown in fig. 2, in this embodiment, the method may further include:
step 210, determining the current viewing angle.
Specifically, after initializing the rendering engine, the initial view may be determined to be a default view. The server may continuously monitor the viewing angle adjustment operation on the rendering engine preset interface and determine the current viewing angle according to the viewing angle adjustment operation. For example, the preset interface of the rendering engine is provided with a view angle adjustment control, at this time, the view angle adjustment operation is a drag operation based on the view angle adjustment control, the server can continuously monitor the view angle adjustment control on the preset interface of the rendering engine, and if the server receives the drag operation triggered by the user based on the view angle adjustment control on the preset interface of the rendering engine, the server determines the current view angle according to the drag operation; and if the server does not receive the drag operation triggered by the user based on the visual angle adjustment control on the preset interface of the rendering engine, determining the default visual angle as the current visual angle.
That is, if the server receives the viewing angle adjustment operation, determining a current viewing angle according to the viewing angle adjustment operation; if the server does not receive the viewing angle adjustment operation, the default viewing angle is determined as the current viewing angle.
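As an illustrative sketch only (the patent prescribes no data structures), the dispatch described above, default view when no adjustment operation is received and the adjusted view otherwise, can be expressed as follows; the dictionary shape of a view is a hypothetical choice:

```python
# Hypothetical representation of a viewing angle: center point plus height.
DEFAULT_VIEW = {"center": (0.0, 0.0), "height": 100.0}

def determine_current_view(adjusted_view, default_view=DEFAULT_VIEW):
    """If a viewing-angle adjustment operation was received on the preset
    interface, the resulting view becomes the current view; otherwise the
    default view set when the rendering engine was initialized is used."""
    return default_view if adjusted_view is None else adjusted_view
```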
In one embodiment, step 210 may specifically include:
And monitoring a visual angle adjustment operation on a preset interface, and determining the current visual angle according to the visual angle adjustment operation.
Further, determining the current viewing angle according to the viewing angle adjustment operation includes:
determining that the viewing angle before the viewing angle adjustment operation is a first viewing angle, determining that the viewing angle after the viewing angle adjustment operation is a second viewing angle, and determining a second viewing angle height based on the second viewing angle; and if the height of the second visual angle is smaller than or equal to the preset height, determining the current visual angle according to the linear distance between the center point of the second visual angle and the center point of the first visual angle.
Specifically, the second viewing angle height may be determined according to a scaling ratio of the second viewing angle, and if the second viewing angle height is greater than the preset height, the resolution of the rendered image determined according to the current viewing angle height is low, at which time the rendered image may be hidden. If the second perspective height is less than or equal to the preset height, it may be determined that the current perspective height is suitable for determining the rendered image. At this time, a straight line distance between the center point of the second viewing angle and the center point of the first viewing angle may be determined, and the current viewing angle may be determined according to the straight line distance between the center point of the second viewing angle and the center point of the first viewing angle.
Further, determining the current viewing angle according to a linear distance between the center point of the second viewing angle and the center point of the first viewing angle includes:
if the linear distance is greater than a preset distance, determining the second viewing angle as the current viewing angle; otherwise, the first view angle is determined as the current view angle.
Specifically, if the straight-line distance is small, the second viewing angle differs little from the first viewing angle, and the rendered image corresponding to the second viewing angle differs little from that corresponding to the first viewing angle, so the first viewing angle can be determined as the current viewing angle. In practical applications, the rendered image corresponding to the first view angle may also be determined as the rendered image corresponding to the current view angle. If the straight-line distance is large, the second viewing angle differs greatly from the first viewing angle, and the second viewing angle may be determined as the current viewing angle. Whether to update the current view angle is thus determined according to the straight-line distance between the center point of the second view angle and the center point of the first view angle: when the straight-line distance is smaller than or equal to the preset distance, the current view angle is not updated, so frequent updating of the current view angle is reduced, and the calculated amount of the rendering engine is further reduced.
In practical application, the preset distance may be 150 meters, that is, when the linear distance between the center point of the second viewing angle and the center point of the first viewing angle is greater than 150 meters, the current viewing angle is updated to determine the second viewing angle as the current viewing angle, otherwise, the current viewing angle is not updated, and the first viewing angle is determined as the current viewing angle.
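A minimal sketch of this update rule, assuming a view is represented by its center point and height: the 150-meter preset distance comes from the description above, while the numeric value of the preset height is an assumption, since the description gives no figure for it.

```python
import math

PRESET_DISTANCE = 150.0   # meters; value given in the description
PRESET_HEIGHT = 1000.0    # assumed threshold; the description gives no number

def update_current_view(first_view, second_view):
    """Decide whether the view after the drag (second_view) replaces the
    view before the drag (first_view).

    Returns (current_view, hide_image):
    - if the second view angle height exceeds the preset height, the
      rendered image is hidden and the view is left unchanged;
    - otherwise the second view becomes current only when its center is
      more than PRESET_DISTANCE meters from the first view's center."""
    if second_view["height"] > PRESET_HEIGHT:
        return first_view, True
    dx = second_view["center"][0] - first_view["center"][0]
    dy = second_view["center"][1] - first_view["center"][1]
    if math.hypot(dx, dy) > PRESET_DISTANCE:
        return second_view, False
    return first_view, False
```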
In addition, after the rendering engine is initialized, in the process of determining the current view angle, if the view angle adjustment operation is the first drag operation, the current view angle is determined according to the straight-line distance between the center point of the second view angle and the center point of the default view angle. If the view angle adjustment operation is an intermediate drag operation and the straight-line distance between the center point of each second view angle and the center point of the default view angle has remained smaller than or equal to 150 meters throughout the drag process, the current view angle has never been updated from the default view angle, and the current view angle can continue to be determined according to the straight-line distance between the center point of the second view angle corresponding to the intermediate drag operation and the center point of the default view angle. If the view angle adjustment operation is an intermediate drag operation and the straight-line distance between the center point of any second view angle and the center point of the default view angle has been greater than 150 meters during the drag process, the current view angle has been updated, and the current view angle can then be determined according to the straight-line distance between the center point of the second view angle and the center point of the first view angle.
In the embodiment of the invention, the current view angle can be determined according to the default view angle or the second view angle, which reduces the number of updates of the current view angle and further improves the stability of the rendering engine. When the current view angle is determined according to the second view angle, if the straight-line distance between the center point of the second view angle and the center point of the first view angle is smaller than or equal to the preset distance, the first view angle can be determined as the current view angle without updating the current view angle, which further reduces the number of updates of the current view angle and solves the problem that the rendering engine is prone to blocking.
And 220, determining a rendering range according to the current view angle.
In one embodiment, step 220 may specifically include:
determining a current center point according to the current view angle; and determining the current center point as a rendering center, and determining the rendering range according to the rendering center and a preset rendering side length or a preset rendering radius.
The preset rendering side length or the preset rendering radius may be 800 meters.
Specifically, the center point coordinate of the camera coordinate system corresponding to the current view angle is determined, and the current center point is determined according to that coordinate. The current center point can then be determined as the rendering center, and a square range can be formed by extending 800 meters up, down, left and right from the current center point, wherein the square range may be the rendering range. Alternatively, the current center point may be determined as the rendering center and a circular range may be formed by extending outward with a radius of 800 meters.
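The square and circular rendering ranges can be sketched as follows, using the 800-meter preset side-length extension and preset radius from the description; the coordinate representation as a flat grid in meters is a hypothetical simplification:

```python
import math

HALF_SIDE = 800.0   # preset rendering side-length extension, meters
RADIUS = 800.0      # preset rendering radius, meters

def square_render_range(center):
    """Square rendering range: extend 800 m up, down, left and right
    from the rendering center (the current view's center point)."""
    cx, cy = center
    return (cx - HALF_SIDE, cy - HALF_SIDE, cx + HALF_SIDE, cy + HALF_SIDE)

def in_square(point, rng):
    """Membership test against the (xmin, ymin, xmax, ymax) square range."""
    xmin, ymin, xmax, ymax = rng
    return xmin <= point[0] <= xmax and ymin <= point[1] <= ymax

def in_circle(point, center, radius=RADIUS):
    """Alternative circular rendering range of radius 800 m."""
    return math.hypot(point[0] - center[0], point[1] - center[1]) <= radius
```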
In the embodiment of the invention, after the current view angle is determined, the rendering range corresponding to the current view angle can be determined, and the rendering range indicates the range required to be rendered by the rendering engine under the current view angle, namely, the rendering engine only needs to render objects in the rendering range and does not need to render objects outside the rendering range, so that the rendering range can be used for screening data sent to the rendering engine and used for rendering so as to reduce the data quantity received by the rendering engine.
And 230, determining target road side equipment according to the rendering range.
Specifically, the target roadside device may be determined according to the target acquisition position within the rendering range, or according to the target device position within the rendering range. The target acquisition position may be determined based on the position information or the identification information of the acquisition positions within the rendering range, and the target device position may be determined based on the position information or the identification information of the device positions within the rendering range.
When the current three-dimensional scene is a street view, southwest corners, northwest corners, northeast corners and/or southeast corners of each intersection are provided with road side bars, and a plurality of road side devices are arranged on the road side bars.
In one embodiment, step 230 may specifically include:
and determining a target acquisition position according to the rendering range, and determining the target road side equipment according to the target acquisition position.
The server may store position information of each intersection; in this case, determining a target acquisition position according to the rendering range and determining the target roadside device according to the target acquisition position may include: determining the intersection whose position information is in the rendering range as a target intersection, and further determining the road side equipment on the road side bars set up at the target intersection as target road side equipment.
The server may further store identification information of each intersection; in this case, determining a target acquisition position according to the rendering range and determining the target roadside device according to the target acquisition position may include: determining the intersection whose position information is in the rendering range as a target intersection, determining the target intersection ID corresponding to the target intersection, and determining the road side equipment as target road side equipment if the intersection ID of the intersection where the road side bar carrying the road side equipment is located is contained in the target intersection IDs.
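A hedged sketch of intersection-based screening, assuming the server stores intersection positions keyed by intersection ID and each device record carries the ID of the intersection it is mounted at (both representations are hypothetical, not prescribed by the description):

```python
def target_devices_by_intersection(intersections, devices, rng):
    """intersections: {intersection_id: (x, y)} positions stored on the
    server; devices: records carrying the intersection ID of the road
    side bar they are mounted on; rng: (xmin, ymin, xmax, ymax) square
    rendering range. Intersections inside the range become target
    intersections, and every device at them a target road side device."""
    xmin, ymin, xmax, ymax = rng
    target_ids = {iid for iid, (x, y) in intersections.items()
                  if xmin <= x <= xmax and ymin <= y <= ymax}
    targets = [d for d in devices if d["intersection_id"] in target_ids]
    return targets, target_ids
```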
In the embodiment of the invention, the target road side equipment can be determined according to the rendering range and the position information or identification information of each acquisition position, namely according to the rendering range and the position information or intersection ID of each intersection, realizing the screening of the road side equipment.
In another embodiment, step 230 may specifically include:
and determining a target equipment position according to the rendering range, and determining the target road side equipment according to the target equipment position.
The server may store position information of each road side bar, and determine a target device position according to the rendering range, and may include: and determining the road side bar with the position information in the rendering range as a target road side bar, and further determining the road side equipment on the target road side bar as target road side equipment.
The server may further store identification information of each road side bar, and determine a target device position according to the rendering range, which may include: and determining a road side bar with position information in a rendering range as a target road side bar, and determining a target bar ID corresponding to the target road side bar, and determining the road side equipment as target road side equipment if the bar ID of the road side bar where the road side equipment is located is contained in the target bar ID.
In the embodiment of the invention, the target road side equipment can be determined according to the rendering range and the position information or the identification information of the positions of the equipment, namely, the target road side equipment can be determined according to the rendering range and the position information or the rod ID of the road side rod, so that the screening of the road side equipment is realized.
And step 240, screening the target road side data acquired by the target road side equipment from the road side data acquired based on the road side equipment, and determining the data to be rendered according to the target road side data.
Specifically, the server may receive the road side data sent by all the road side devices, while only the target road side data acquired by the target road side equipment is required, so the road side data needs to be screened to obtain the target road side data.
The road side data can comprise the intersection ID and the bar ID of the road side equipment that acquired it. On the one hand, if the intersection ID of the intersection at which the roadside device that acquired the roadside data is located is included in the target intersection IDs, the roadside data is determined to be target roadside data. On the other hand, if the bar ID of the road side bar on which the road side device that acquired the road side data is located is included in the target bar IDs, the road side data is determined to be target road side data.
Of course, since one intersection may include at least one road side bar, the calculation amount of screening the target road side data from the road side data according to the position information or the bar ID of the road side bar is slightly larger than that of screening according to the position information or the intersection ID of the intersection. When both the position information or intersection ID of each intersection and the position information or bar ID of each road side bar are stored in the server, the target road side data is generally selected from the road side data according to the position information or the intersection ID of the intersection. When only the position information or the bar ID of the road side bar is stored in the server, the target road side data is selected from the road side data based on the position information or the bar ID of the road side bar.
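The two screening paths above can be sketched as a single function; the record fields `intersection_id` and `pole_id` are hypothetical names for the intersection ID and bar ID carried by each road side data record:

```python
def screen_roadside_data(roadside_data, target_intersection_ids=None,
                         target_pole_ids=None):
    """Prefer intersection-ID screening when the server stores
    intersection information (one intersection groups several bars, so
    fewer IDs are compared); otherwise fall back to the bar (pole) IDs."""
    if target_intersection_ids is not None:
        return [r for r in roadside_data
                if r["intersection_id"] in target_intersection_ids]
    return [r for r in roadside_data if r["pole_id"] in target_pole_ids]
```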
In one embodiment, the roadside device includes an image acquisition device and a radar device, the target roadside data includes target image data and target radar data, and accordingly, the data to be rendered is determined according to the target roadside data, and further includes:
And carrying out data fusion on the target image data and the target radar data to obtain the data to be rendered.
The image acquisition device is used for acquiring image data in an acquisition range of the image acquisition device, the acquisition range of the image acquisition device can be 200 meters, the radar device is used for acquiring radar data in the acquisition range of the radar device, and the acquisition range of the radar device can be 200 meters. The radar apparatus may include a laser radar for determining information of the number, position, distance, speed, acceleration, and the like of the objects within the acquisition range, and a millimeter wave radar for determining distance information between the objects within the acquisition range and surrounding objects.
Specifically, the image data acquired by the image acquisition device and the radar data acquired by the radar device may be subjected to data fusion based on a fusion perception algorithm to obtain rendering data.
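The description names only a "fusion perception algorithm" without specifying it, so the following is a stand-in sketch using naive nearest-neighbour association between camera detections and radar tracks; all field names and the 2-meter association gate are assumptions, not part of the described method:

```python
import math

def fuse_detections(image_objects, radar_objects, max_dist=2.0):
    """Associate each camera detection with the nearest radar track
    within max_dist meters and merge their attributes: the class label
    comes from the image data, position and speed from the radar data."""
    fused = []
    for img in image_objects:
        best, best_d = None, max_dist
        for rad in radar_objects:
            d = math.hypot(img["x"] - rad["x"], img["y"] - rad["y"])
            if d < best_d:
                best, best_d = rad, d
        if best is not None:
            fused.append({"label": img["label"], "x": best["x"],
                          "y": best["y"], "speed": best["speed"]})
    return fused
```

A production fusion algorithm would also handle unmatched radar tracks and track identity over time; this sketch only shows the merging of the two modalities into one record per object.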
According to the embodiment of the invention, the road side equipment is screened according to the rendering range to obtain the target road side equipment, and the road side data is then screened: the data acquired by the target road side equipment is determined to be the target road side data, obtained by screening from the road side data acquired by all the road side equipment. This reduces the data quantity, and screening out the road side data outside the rendering range ensures that the obtained target road side data meets the requirements of users. The road side data in the rendering range, namely the target road side data, can then be determined as the data to be rendered.
Step 250, sending the data to be rendered to a rendering engine, so that the rendering engine renders the data to be rendered to generate a rendering image.
Specifically, the server may send the screened data to be rendered to the rendering engine, at this time, the amount of data sent to the rendering engine is reduced, and the rendering engine may render the data to be rendered to obtain a rendered image containing the target object within the rendering range.
In the embodiment of the invention, on the premise of reducing the data processing amount of the rendering engine, the rendering of the road side data in the rendering range, namely the target road side data serving as the data to be rendered, is realized, and the rendering image is obtained. Because the data processing amount of the rendering engine is reduced, the rendering efficiency and the rendering frame rate are improved, the problem of rendering easily getting stuck in the rendering process is solved, and the stability of the rendering engine is further improved.
The image generation method provided by the embodiment of the invention comprises the following steps: determining a current view angle, and determining a rendering range according to the current view angle; screening the road side data acquired from the road side equipment according to the rendering range, and determining the data to be rendered according to the screening result; and sending the data to be rendered to a rendering engine so that the rendering engine renders the data to be rendered to generate a rendering image. According to the technical scheme, the current view angle is determined according to the default view angle or the second view angle, namely the view angle after the view angle adjustment operation, which reduces the number of updates of the current view angle; when the current view angle is determined according to the second view angle, whether to update the current view angle is decided according to the straight-line distance between the center point of the second view angle and the center point of the first view angle, namely the view angle before the view angle adjustment operation, which further reduces the number of updates. The rendering range determined according to the current view angle indicates the range of the target objects to be displayed in the rendering image. The server receives the road side data sent by the road side equipment, which may comprise the data sent to the server by all the road side equipment in the current scene. If all of this road side data were sent to the rendering engine for rendering, the calculated amount of the rendering engine would be large and the rendering image would contain objects that do not need to be displayed. Therefore, the road side data is screened according to the rendering range, the road side data acquired by the road side equipment in the rendering range is determined to be the data to be rendered, which is reduced compared with all the road side data, and the data to be rendered is sent to the rendering engine so that the rendering engine renders it and generates a rendering image containing the objects to be rendered.
In the server, the road side data acquired by the road side equipment within the rendering range determined according to the current view angle, namely the target road side equipment, is determined to be the data to be rendered and sent to the rendering engine. The rendering engine only renders the data to be rendered to obtain a rendering image, so the data processing amount of the rendering engine is reduced and the generated rendering image only contains target objects. The problems of low rendering efficiency, rendering blocking and frequent calculation when the rendering image is displayed on a preset interface are solved, and the user experience is further improved.
Fig. 3 is a flowchart of an implementation of an image generating method according to an embodiment of the present invention, and an implementation of the method is given by way of example. As shown in fig. 3, includes:
Step 310, initializing a rendering engine and determining the initial view as a default view.
Step 311, determining whether the server receives a viewing angle adjustment operation triggered by the user based on a viewing angle adjustment control on a rendering engine preset interface.
If the server does not receive the user's view angle adjustment operation based on the rendering engine preset interface, step 312 is performed, otherwise step 313 is performed.
Step 312, determining the default view as the current view.
Step 320 is performed after step 312.
Step 313, determining that the viewing angle before the viewing angle adjustment operation is the first viewing angle, determining that the viewing angle after the viewing angle adjustment operation is the second viewing angle, and determining the second viewing angle height based on the second viewing angle.
Step 314, determining whether the second viewing angle height is greater than a preset height.
If the viewing angle height is greater than the preset height, step 315 is performed; otherwise, step 316 is performed.
Step 315, hiding the rendered image.
Step 316, determining a linear distance between the center point of the second viewing angle and the center point of the first viewing angle.
Step 317, determining whether the linear distance is greater than a preset distance.
If the linear distance is not greater than the preset distance, then step 318 is performed; otherwise, step 319 is performed.
Step 318, determining the first viewing angle as the current viewing angle.
Step 320 is performed after step 318.
Step 319, determining the second viewing angle as the current viewing angle.
Step 320, determining a current center point according to the current view angle; the current center point is determined to be a rendering center, and a rendering range is determined according to the rendering center and a preset rendering side length or a preset rendering radius.
Step 321, determining a target intersection ID in a rendering range.
Step 322, determining whether the intersection ID of the road side device acquiring the road side data is included in the target intersection ID.
If the intersection ID where the road side device that acquires the road side data is located is included in the target intersection ID, step 323 is executed, otherwise step 324 is executed.
Step 323, determining the road side data as target road side data, determining the target road side data as data to be rendered, and sending the data to be rendered to a rendering engine, so that the rendering engine renders the data to generate a rendering image.
Step 324, screening out the road side data without sending it to the rendering engine.
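The flow of steps 310 through 324 can be condensed into one function, again with hypothetical data shapes: the preset height threshold is an assumption, while the 150-meter preset distance and 800-meter extension come from the description.

```python
import math

def generate_data_to_render(adjusted_view, default_view, first_view,
                            intersections, roadside_data,
                            preset_height=1000.0, preset_distance=150.0,
                            half_side=800.0):
    """Returns (current_view, records); records is None when the
    rendered image is hidden (second view angle above the preset height)."""
    if adjusted_view is None:                         # steps 311-312
        view = default_view
    elif adjusted_view["height"] > preset_height:     # steps 313-315
        return first_view, None                       # hide rendered image
    else:                                             # steps 316-319
        dx = adjusted_view["center"][0] - first_view["center"][0]
        dy = adjusted_view["center"][1] - first_view["center"][1]
        view = (adjusted_view if math.hypot(dx, dy) > preset_distance
                else first_view)
    cx, cy = view["center"]                           # steps 320-321
    target_ids = {iid for iid, (x, y) in intersections.items()
                  if abs(x - cx) <= half_side and abs(y - cy) <= half_side}
    return view, [r for r in roadside_data            # steps 322-324
                  if r["intersection_id"] in target_ids]
```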
According to this implementation of the image generation method, the current view angle is determined according to the default view angle or the second view angle, namely the view angle after the view angle adjustment operation, which reduces the number of updates of the current view angle; when the current view angle is determined according to the second view angle, whether to update the current view angle is decided according to the straight-line distance between the center point of the second view angle and the center point of the first view angle, namely the view angle before the view angle adjustment operation, which further reduces the number of updates. In addition, when the second view angle height is greater than the preset height, the rendering image is hidden, which improves the rendering frame rate. The rendering range determined according to the current view angle indicates the range of the target objects to be displayed in the rendering image under the current view angle. The server receives the road side data sent by the road side equipment in the current scene, so the road side data is screened according to the rendering range: the target road side data acquired by the target road side equipment in the rendering range is determined to be the data to be rendered, which is reduced compared with all the road side data, and is sent to the rendering engine so that the rendering engine renders it to generate a rendering image containing the target objects.
In the server, the road side data acquired by the road side equipment within the rendering range determined according to the current view angle, namely the target road side equipment, is determined to be the data to be rendered and sent to the rendering engine. The rendering engine only renders the data to be rendered to obtain a rendering image, so the data processing amount of the rendering engine is reduced and the generated rendering image only contains target objects. The problems of low rendering efficiency, rendering blocking and frequent calculation when the rendering image is displayed on a preset interface are solved, and the user experience is further improved.
Fig. 4 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present invention, where the apparatus may be adapted to reduce the calculation amount of a rendering engine when generating a rendered image and improve the stability of the rendering engine. The apparatus may be implemented in software and/or hardware and is typically integrated in a server.
As shown in fig. 4, the apparatus includes:
A determining module 410, configured to determine a current viewing angle, and determine a rendering range according to the current viewing angle;
The screening module 420 is configured to screen the road side data acquired from the road side device according to the rendering range, and determine the data to be rendered according to the screening result;
the rendering module 430 is configured to send the data to be rendered to a rendering engine, so that the rendering engine renders the data to be rendered, and generates a rendered image including a perception target.
The image generation device provided by this embodiment determines a current view angle and determines a rendering range according to the current view angle; screens the road side data acquired from the road side equipment according to the rendering range, and determines the data to be rendered according to the screening result; and sends the data to be rendered to a rendering engine so that the rendering engine renders the data to be rendered to generate a rendering image. According to the technical scheme, the rendering range is determined according to the current view angle and indicates the range of the objects to be displayed in the rendering image under the current view angle. The server receives the road side data sent by the road side equipment, which may comprise the data sent to the server by all the road side equipment under the current view angle. If all of this road side data were sent to the rendering engine for rendering, the calculated amount of the rendering engine would be large and the rendering image would contain objects that do not need to be displayed. Therefore, the road side data sent to the server is screened according to the rendering range, and the road side data acquired by the road side equipment in the rendering range is determined to be the data to be rendered, which is reduced compared with all the road side data. The data to be rendered is then sent to the rendering engine, so that the rendering engine renders it and generates a rendering image containing the objects to be rendered.
In the server, the road side data acquired by the road side equipment within the rendering range determined according to the current view angle, namely the target road side equipment, is determined to be the data to be rendered and sent to the rendering engine. The rendering engine only renders the data to be rendered to obtain a rendering image, so the data processing amount of the rendering engine is reduced and the generated rendering image only contains target objects. The problems of low rendering efficiency, rendering blocking and frequent calculation when the rendering image is displayed on a preset interface are solved, and the user experience is further improved.
Based on the above embodiment, the determining module 410 is specifically configured to:
Monitoring a visual angle adjustment operation on a preset interface, and determining the current visual angle according to the visual angle adjustment operation;
determining a current center point according to the current view angle; and determining the current center point as a rendering center, and determining the rendering range according to the rendering center and a preset rendering side length or a preset rendering radius.
In one embodiment, determining the current viewing angle according to the viewing angle adjustment operation includes:
determining that the viewing angle before the viewing angle adjustment operation is a first viewing angle, determining that the viewing angle after the viewing angle adjustment operation is a second viewing angle, and determining a second viewing angle height based on the second viewing angle; and if the height of the second visual angle is smaller than or equal to the preset height, determining the current visual angle according to the linear distance between the center point of the second visual angle and the center point of the first visual angle.
Further, determining the current viewing angle according to a linear distance between the center point of the second viewing angle and the center point of the first viewing angle includes:
if the linear distance is greater than a preset distance, determining the second viewing angle as the current viewing angle; otherwise, the first view angle is determined as the current view angle.
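The height and distance thresholds described above can be combined into one selection routine. A sketch under stated assumptions: each view is a (center_x, center_y, height) tuple, the threshold names are invented here, and the branch taken when the height exceeds the preset height is not specified in this passage, so the sketch simply keeps the adjusted view in that case.

```python
import math

def select_current_view(first_view, second_view, max_height, min_shift):
    """Decide which view angle becomes the current one after an adjustment
    operation. `first_view` is the view before the operation, `second_view`
    the view after it; `max_height` is the preset height and `min_shift` the
    preset distance."""
    if second_view[2] > max_height:
        # Behavior above the preset height is not given in this excerpt;
        # keeping the adjusted view here is purely a placeholder assumption.
        return second_view
    dist = math.hypot(second_view[0] - first_view[0],
                      second_view[1] - first_view[1])
    # Only a shift larger than the preset distance adopts the new view,
    # so small drags do not change the current view or trigger a re-render.
    return second_view if dist > min_shift else first_view
```

A drag that moves the center by less than the preset distance keeps the first view; a larger drag adopts the second view.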
Based on the above embodiment, the screening module 420 is specifically configured to:
determining target road side equipment according to the rendering range;
screening the target road side data acquired by the target road side equipment from the road side data acquired from the road side equipment, and determining the data to be rendered according to the target road side data.
In one embodiment, determining a target roadside device according to the rendering range includes:
determining a target acquisition position according to the rendering range, and determining the target road side equipment according to the target acquisition position, wherein the target acquisition position is an acquisition position whose position information falls within the rendering range.
In another embodiment, determining the target roadside device according to the rendering range includes:
and determining a target equipment position according to the rendering range, and determining the target road side equipment according to the target equipment position.
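Both embodiments above reduce to a position-in-range test; the sketch below shows the device-position variant, and the acquisition-position variant is identical with acquisition coordinates substituted for device coordinates. The mapping layout and the rectangular range are illustrative assumptions.

```python
def target_devices(devices, rendering_range):
    """Screen road side devices whose registered (x, y) position falls inside
    the rendering range, given as (min_x, min_y, max_x, max_y). `devices`
    maps a device id to its position."""
    min_x, min_y, max_x, max_y = rendering_range
    return {
        dev_id for dev_id, (x, y) in devices.items()
        if min_x <= x <= max_x and min_y <= y <= max_y
    }
```

Only the data from the devices this returns would then be forwarded toward the rendering engine.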
Further, the road side equipment includes an image acquisition device and a radar device, and the target road side data includes target image data and target radar data. Accordingly, determining the data to be rendered according to the target road side data further includes:
performing data fusion on the target image data and the target radar data to obtain the data to be rendered.
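The patent names the fusion step but does not specify the fusion algorithm, so the sketch below is a deliberately naive stand-in: it joins camera detections and radar measurements on a shared object id, an assumption made purely for illustration.

```python
def fuse(image_objects, radar_objects):
    """Merge camera detections (e.g. {"id", "class"}) with radar measurements
    (e.g. {"id", "speed"}) by object id. Id-based matching is an illustrative
    assumption, not the patent's fusion method."""
    fused = {}
    for obj in image_objects:
        fused[obj["id"]] = dict(obj)          # start from the camera record
    for meas in radar_objects:
        fused.setdefault(meas["id"], {}).update(meas)  # overlay radar fields
    return list(fused.values())
```

An object seen by both sensors ends up as one record carrying both the image-derived class and the radar-derived kinematics.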
The image generating device provided by the embodiment of the invention can execute the image generation method provided by any embodiment of the invention, and has functional modules and beneficial effects corresponding to the executed method.
It should be noted that, in the embodiment of the image generating apparatus described above, each unit and module included is only divided according to the functional logic, but is not limited to the above-described division, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention. Fig. 5 shows a block diagram of an exemplary computer device 5 suitable for use in implementing embodiments of the invention. The computer device 5 shown in fig. 5 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments of the invention.
As shown in fig. 5, the computer device 5 is in the form of a general-purpose computing device. The components of the computer device 5 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer device 5 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 5 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer device 5 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard disk drive"). Although not shown in fig. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. The system memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The computer device 5 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the computer device 5, and/or any devices (e.g., network card, modem, etc.) that enable the computer device 5 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, the computer device 5 may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through a network adapter 20. As shown in fig. 5, the network adapter 20 communicates with other modules of the computer device 5 via the bus 18. It should be appreciated that although not shown in fig. 5, other hardware and/or software modules may be used in connection with computer device 5, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and page display by running a program stored in the system memory 28, for example, implementing an image generation method provided by the present embodiment, the method including:
determining a current view angle, and determining a rendering range according to the current view angle;
screening the road side data acquired from the road side equipment according to the rendering range, and determining the data to be rendered according to the screening result;
and sending the data to be rendered to a rendering engine so that the rendering engine renders the data to be rendered to generate a rendering image.
Of course, those skilled in the art will appreciate that the processor may also implement the technical solution of the image generating method provided in any embodiment of the present invention.
An embodiment of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an image generation method such as provided by the embodiment of the present invention, the method including:
determining a current view angle, and determining a rendering range according to the current view angle;
screening the road side data acquired from the road side equipment according to the rendering range, and determining the data to be rendered according to the screening result;
and sending the data to be rendered to a rendering engine so that the rendering engine renders the data to be rendered to generate a rendering image.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium may be, for example, but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
It will be appreciated by those of ordinary skill in the art that the modules or steps of the invention described above may be implemented on a general-purpose computing device; they may be centralized on a single computing device or distributed over a network of computing devices; they may alternatively be implemented in program code executable by a computing device, so that they are stored in a memory device and executed by the computing device; or they may be separately fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
In addition, the acquisition, storage, use, and processing of data in the technical scheme of the present invention comply with the relevant provisions of national laws and regulations.
It should be noted that the above describes only preferred embodiments of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made by those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to those embodiments, and may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is defined by the following claims.
Claims (9)
1. An image generation method, comprising:
Monitoring a visual angle adjustment operation on a preset interface; the visual angle adjustment operation is a drag operation based on a visual angle adjustment control;
determining a current viewing angle according to the viewing angle adjustment operation, including: if a drag operation triggered by a user based on the visual angle adjustment control is received, determining a current visual angle according to the drag operation; if the drag operation is not received, determining a default view angle as a current view angle;
Determining a current center point according to the current view angle; determining the current center point as a rendering center, and determining a rendering range according to the rendering center and a preset rendering side length or a preset rendering radius;
determining target road side equipment according to the rendering range;
screening target road side data acquired by the target road side equipment from the road side data acquired from the road side equipment, and determining data to be rendered according to the target road side data;
Transmitting the data to be rendered to a rendering engine so that the rendering engine renders the data to be rendered to generate a rendering image; the rendering image is a two-dimensional rendering image generated by rendering the data to be rendered determined by the target road side data in the three-dimensional scene by using the rendering engine.
2. The image generation method according to claim 1, wherein determining the current viewing angle according to the viewing angle adjustment operation further comprises:
Determining that the viewing angle before the viewing angle adjustment operation is a first viewing angle, determining that the viewing angle after the viewing angle adjustment operation is a second viewing angle, and determining a second viewing angle height based on the second viewing angle;
and if the height of the second visual angle is smaller than or equal to the preset height, determining the current visual angle according to the linear distance between the center point of the second visual angle and the center point of the first visual angle.
3. The image generation method according to claim 2, wherein determining the current view angle according to a straight line distance of a center point of the second view angle from a center point of the first view angle includes:
if the linear distance is greater than a preset distance, determining the second viewing angle as the current viewing angle; otherwise, the first view angle is determined as the current view angle.
4. The image generation method according to claim 1, wherein determining a target roadside apparatus from the rendering range includes:
determining a target acquisition position according to the rendering range, and determining the target road side equipment according to the target acquisition position, wherein the target acquisition position is an acquisition position whose position information falls within the rendering range.
5. The image generation method according to claim 1, wherein determining a target roadside apparatus from the rendering range includes:
and determining a target equipment position according to the rendering range, and determining the target road side equipment according to the target equipment position.
6. The image generation method according to claim 1, wherein the road side equipment includes an image acquisition device and a radar device, the target road side data includes target image data and target radar data, and accordingly, determining the data to be rendered according to the target road side data further comprises:
performing data fusion on the target image data and the target radar data to obtain the data to be rendered.
7. An image generating apparatus, comprising:
The determining module is used for monitoring a visual angle adjustment operation on a preset interface, the visual angle adjustment operation being a drag operation based on a visual angle adjustment control, and for determining the current visual angle according to the visual angle adjustment operation, and is specifically used for: if a drag operation triggered by a user based on the visual angle adjustment control is received, determining the current visual angle according to the drag operation; if the drag operation is not received, determining a default visual angle as the current visual angle; determining a current center point according to the current visual angle; and determining the current center point as a rendering center, and determining a rendering range according to the rendering center and a preset rendering side length or a preset rendering radius;
the screening module is used for determining target road side equipment according to the rendering range, screening target road side data acquired by the target road side equipment from the road side data acquired from the road side equipment, and determining data to be rendered according to the target road side data;
The rendering module is used for sending the data to be rendered to a rendering engine so that the rendering engine renders the data to be rendered and generates a rendering image; the rendering image is a two-dimensional rendering image generated by rendering the data to be rendered determined by the target road side data in the three-dimensional scene by using the rendering engine.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the image generation method of any of claims 1-6 when the program is executed by the processor.
9. A storage medium containing computer executable instructions for performing the image generation method of any of claims 1-6 when executed by a computer processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210767384.4A CN115018967B (en) | 2022-06-30 | 2022-06-30 | Image generation method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115018967A CN115018967A (en) | 2022-09-06 |
CN115018967B true CN115018967B (en) | 2024-05-03 |
Family
ID=83079283
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210767384.4A Active CN115018967B (en) | 2022-06-30 | 2022-06-30 | Image generation method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115018967B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116150520B (en) * | 2022-12-30 | 2023-11-14 | 联通智网科技股份有限公司 | Data processing method, device, equipment and storage medium |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971392A (en) * | 2013-01-31 | 2014-08-06 | 北京四维图新科技股份有限公司 | Navigation-oriented three-dimensional video data processing method and device, system and terminal |
KR20140140442A (en) * | 2013-05-29 | 2014-12-09 | 한국항공대학교산학협력단 | Information System based on mobile augmented reality |
CN105913478A (en) * | 2015-12-28 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | 360-degree panorama display method and display module, and mobile terminal |
CN110244741A (en) * | 2019-06-28 | 2019-09-17 | 奇瑞汽车股份有限公司 | Control method, device and the storage medium of intelligent automobile |
CN111402374A (en) * | 2018-12-29 | 2020-07-10 | 曜科智能科技(上海)有限公司 | Method, device, equipment and storage medium for fusing multi-channel video and three-dimensional model |
CN111835998A (en) * | 2019-04-13 | 2020-10-27 | 长沙智能驾驶研究院有限公司 | Beyond-visual-range panoramic image acquisition method, device, medium, equipment and system |
CN111833627A (en) * | 2019-04-13 | 2020-10-27 | 长沙智能驾驶研究院有限公司 | Vehicle visual range expansion method, device and system and computer equipment |
CN111882634A (en) * | 2020-07-24 | 2020-11-03 | 上海米哈游天命科技有限公司 | Image rendering method, device and equipment and storage medium |
CN112033425A (en) * | 2019-06-04 | 2020-12-04 | 长沙智能驾驶研究院有限公司 | Vehicle driving assistance method and device, computer equipment and storage medium |
CN112215048A (en) * | 2019-07-12 | 2021-01-12 | 中国移动通信有限公司研究院 | 3D target detection method and device and computer readable storage medium |
CN112235604A (en) * | 2020-10-20 | 2021-01-15 | 广州博冠信息科技有限公司 | Rendering method and device, computer readable storage medium and electronic device |
CN112288825A (en) * | 2020-10-29 | 2021-01-29 | 北京百度网讯科技有限公司 | Camera calibration method and device, electronic equipment, storage medium and road side equipment |
CN112585963A (en) * | 2018-07-05 | 2021-03-30 | Pcms控股公司 | Method and system for 3D-aware near-to-eye focal plane overlay of content on 2D displays |
CN113232661A (en) * | 2021-05-28 | 2021-08-10 | 广州小鹏汽车科技有限公司 | Control method, vehicle-mounted terminal and vehicle |
CN113450390A (en) * | 2021-09-01 | 2021-09-28 | 智道网联科技(北京)有限公司 | Target tracking method and device based on road side camera and electronic equipment |
CN113483771A (en) * | 2021-06-30 | 2021-10-08 | 北京百度网讯科技有限公司 | Method, device and system for generating live-action map |
CN114485690A (en) * | 2021-12-29 | 2022-05-13 | 北京百度网讯科技有限公司 | Navigation map generation method and device, electronic equipment and storage medium |
CN114564268A (en) * | 2022-03-01 | 2022-05-31 | 阿波罗智联(北京)科技有限公司 | Equipment management method and device, electronic equipment and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10366290B2 (en) * | 2016-05-11 | 2019-07-30 | Baidu Usa Llc | System and method for providing augmented virtual reality content in autonomous vehicles |
US20200041995A1 (en) * | 2018-10-10 | 2020-02-06 | Waymo Llc | Method for realtime remote-operation of self-driving cars by forward scene prediction. |
US20200346114A1 (en) * | 2019-04-30 | 2020-11-05 | Microsoft Technology Licensing, Llc | Contextual in-game element recognition and dynamic advertisement overlay |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8928657B2 (en) | Progressive disclosure of indoor maps | |
US8243102B1 (en) | Derivative-based selection of zones for banded map display | |
US9417777B2 (en) | Enabling quick display transitions between indoor and outdoor map data | |
US7612777B2 (en) | Animation generating apparatus, animation generating method, and animation generating program | |
US9134886B2 (en) | Providing indoor facility information on a digital map | |
CN108961165B (en) | Method and device for loading image | |
AU2020295360B9 (en) | Spatial processing for map geometry simplification | |
EP2766876B1 (en) | Use of banding to optimize map rendering in a three-dimensional tilt view | |
US20190005665A1 (en) | Presenting markup in a scene using depth fading | |
CN111882632B (en) | Surface detail rendering method, device, equipment and storage medium | |
CN115018967B (en) | Image generation method, device, equipment and storage medium | |
CN116363082A (en) | Collision detection method, device, equipment and program product for map elements | |
CN111862342A (en) | Texture processing method and device for augmented reality, electronic equipment and storage medium | |
US10262631B1 (en) | Large scale highly detailed model review using augmented reality | |
CN115830261A (en) | 2D map and 3D twin live-action combined shipping monitoring method and equipment | |
CN115830207A (en) | Three-dimensional scene roaming method, device, equipment and medium | |
KR20160143936A (en) | Method for increasing 3D rendering performance and system thereof | |
CN109887078B (en) | Sky drawing method, device, equipment and medium | |
CN111429576B (en) | Information display method, electronic device, and computer-readable medium | |
CN113570256A (en) | Data processing method and device applied to city planning, electronic equipment and medium | |
CN110910482A (en) | Method, system and readable storage medium for organizing and scheduling video data | |
CN117197408B (en) | Automatic avoiding method, device, medium and equipment for label based on osgEarth D simulation environment | |
CN116150520B (en) | Data processing method, device, equipment and storage medium | |
CN115033324B (en) | Method and device for displaying diagrams in three-dimensional space page, electronic equipment and medium | |
KR20070046902A (en) | Displaying view ports within a large desktop area |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |