CN110400373B - Video information processing method and server - Google Patents


Info

Publication number
CN110400373B
CN110400373B (application CN201810380249.8A)
Authority
CN
China
Prior art keywords
information
base station
video information
target area
cell
Prior art date
Legal status
Active
Application number
CN201810380249.8A
Other languages
Chinese (zh)
Other versions
CN110400373A (en)
Inventor
程日涛
王乐
汪况伦
骆润
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Design Institute Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Design Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Design Institute Co Ltd
Priority to CN201810380249.8A
Publication of CN110400373A
Application granted
Publication of CN110400373B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present invention provides a video information processing method and a server. The method is applied to a server and includes the following steps: receiving video information transmitted by a base station in a target area and extracting the ground objects in the video information, the video information being panoramic video information collected at a preset position of a cell of the base station; determining the ground object information of each ground object, the ground object information including at least the position information, three-dimensional information, and material information of the ground object; and generating a three-dimensional model of the target area according to the ground object information and a first preset algorithm, and sending the three-dimensional model to a virtual reality (VR) terminal. The method can acquire ground objects and their information accurately and comprehensively, display the ground object conditions of the target area intuitively, and achieve the technical effect of an on-site survey by planning and design personnel.

Description

Video information processing method and server
Technical Field
The embodiment of the invention relates to the technical field of mobile communication, in particular to a video information processing method and a server.
Background
Radio network planning and design refers to the process of preliminarily planning the geographic positions of base stations after the radio network planning requirements of a planning area have been obtained; it is completed through steps such as radio environment analysis, scale estimation, topology design, and network pre-planning simulation.
At present, wireless network planning and design suffers from inaccurate acquisition of ground object information. Specifically, when determining wireless network planning requirements, a digital map is generally used to acquire the ground object information of the wireless propagation environment. Its highest precision is usually 5 meters, which is low, and it lacks sufficient three-dimensional (3D) ground object information such as tree heights. Moreover, because maps are updated slowly, the timeliness of the ground object information is poor. Ground object information in the wireless propagation model is the basic data for network simulation analysis, and problems such as poor timeliness, insufficient precision, and missing ground object information inevitably affect the accuracy of the wireless network planning and design scheme. In addition, a base station currently has only one piece of position information, provided by its Global Positioning System (GPS) antenna; in practice, the distance between the antennas of different cells of the same base station often reaches or even exceeds tens of meters, and having several antennas share one longitude-latitude position also reduces the accuracy of the scheme.
On the other hand, wireless network planning and design is limited in display capability. Network analysis and scheme formulation are generally performed on an ordinary personal computer (PC). The information display capability of an ordinary PC is very limited, so planners cannot comprehensively observe the wireless network environment and its performance, and network state information is difficult to analyze comprehensively and intuitively, which affects working efficiency. Planners are also required to perform on-site surveys, which mainly comprise preparing survey tools (such as vehicles, intelligent survey terminals, GPS locators, and digital cameras), recording on-site survey conditions (such as photos of the antenna environment and records of antenna dimensions and orientations), organizing the survey information (entering on-site photos, positions, dimensions, and other information into a PC), and drawing design drawings, which involves a large workload.
Disclosure of Invention
The embodiment of the invention provides a video information processing method and a server, which are used for solving the problems of inaccurate ground object information acquisition and limited display capability in the wireless network planning and design in the prior art.
In one aspect, an embodiment of the present invention provides a video information processing method, applied to a server, where the method includes:
receiving video information transmitted by a base station in a target area, and extracting ground objects in the video information; the video information is panoramic video information collected at a preset position of a cell of the base station;
determining the ground object information of each ground object; the ground object information at least comprises position information, three-dimensional information and material information of the ground object;
and generating a three-dimensional model of the target area according to the ground feature information and a first preset algorithm, and sending the three-dimensional model to a Virtual Reality (VR) terminal.
On the other hand, the embodiment of the invention provides a video information processing method, which is applied to a cell of a base station, and comprises the following steps:
collecting video information of the cell; the video information is panoramic video information collected at a preset position of the cell;
And sending the video information to a server.
On the other hand, an embodiment of the present invention provides a video information processing method, which is applied to a virtual reality VR terminal, including:
receiving a three-dimensional model of a target area sent by a server; the three-dimensional model is generated by the server according to video information transmitted by a base station in the target area, and the video information is panoramic video information acquired at a preset position of a cell of the base station;
acquiring current network performance parameters in the target area;
and superimposing the current network performance parameters onto the three-dimensional model.
In another aspect, an embodiment of the present invention provides a server, including:
the receiving module is used for receiving video information transmitted by a base station in a target area and extracting ground objects in the video information;
the determining module is used for determining the ground object information of each ground object; the ground object information at least comprises position information, three-dimensional information and material information of the ground object;
and the processing module is used for generating a three-dimensional model of the target area according to the ground feature information and a first preset algorithm and sending the three-dimensional model to a virtual reality VR terminal.
In another aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, a bus, and a computer program stored in the memory and capable of running on the processor, where the steps in the video information processing method are implemented when the processor executes the program.
In yet another aspect, embodiments of the present invention further provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the video information processing method described above.
According to the video information processing method and the server provided by the embodiment of the invention, the ground feature in the video information is extracted by receiving the video information transmitted by the base station in the target area, and the ground feature information of each ground feature is determined, so that the ground feature and the ground feature information can be accurately and comprehensively acquired; according to the ground feature information and a first preset algorithm, a three-dimensional model of the target area is generated, the three-dimensional model is sent to the VR terminal, the three-dimensional model is displayed by combining the VR terminal, the ground feature condition of the target area can be intuitively displayed, and the technical effect of on-site investigation of planning designers is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a video information processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an application scenario according to an embodiment of the present invention;
fig. 3 is a flowchart of a video information processing method according to another embodiment of the present invention;
Fig. 4 is a schematic view of a scenario of a specific example of the present invention;
fig. 5 is a flowchart of a video information processing method according to another embodiment of the present invention;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a base station according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a VR terminal according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic flow chart of a video information processing method according to an embodiment of the present invention.
As shown in fig. 1, the video information processing method provided by the embodiment of the present invention is applied to a server, and specifically includes the following steps:
step 101, receiving video information transmitted by a base station in a target area, and extracting ground objects in the video information; the video information is panoramic video information collected at a preset position of a cell of the base station.
Specifically, referring to fig. 2, which is a schematic application scenario of an embodiment of the present invention, the cell is a cell of a base station in the target area; the cell collects video information and transmits it to the server through the base station, and the server receives the video information transmitted by the base station and extracts the ground objects in it.
The panoramic video information is video information of a 720-degree or 360-degree panoramic video, that is, a video file or video data. Through the panoramic video information, the real scene around the base station can be acquired comprehensively and in real time, so that missing ground object information is avoided.
A ground object is a generic term for the various objects on the ground, generally referring to objects that are relatively fixed on the earth's surface, including tangible objects such as mountains, forests, and buildings, as well as intangible objects such as provincial and county boundaries. The server receives the panoramic video information transmitted by the base station and extracts the ground objects in it; the real-time nature and comprehensiveness of the panoramic video information improve the accuracy of ground object information acquisition.
102, determining the ground object information of each ground object; the ground object information at least comprises position information, three-dimensional information and material information of the ground object.
According to the video information, the server can determine, using an artificial intelligence algorithm, the ground object information of each ground object in the video. The ground object information includes at least the position information, three-dimensional information, and material information of the ground object: the position information is GPS information, the three-dimensional information includes the length, width, and height, and the material information represents the material of the ground object, such as tree, glass, or reinforced concrete.
And step 103, generating a three-dimensional model of the target area according to the ground feature information and a first preset algorithm, and sending the three-dimensional model to a virtual reality VR terminal.
After the ground feature and the ground feature information thereof in the target area are determined, a three-dimensional model is generated according to a first preset algorithm, each ground feature is displayed in the three-dimensional model according to the respective ground feature information, and with continued reference to fig. 2, the server further transmits the three-dimensional model to the VR terminal, and the actual environment of the target area can be truly displayed through the VR terminal, so that the on-site investigation work of planning designers can be replaced.
The first preset algorithm may be image-based modeling and rendering (IBMR) or another algorithm.
And the technical effect of on-site investigation of planning designers is achieved by establishing a three-dimensional model to display the ground feature information of the target area.
The method can accurately and comprehensively acquire the ground object and the ground object information, intuitively display the ground object condition of the target area, and achieve the technical effect of on-site investigation of planning and designing personnel.
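The server-side flow of steps 101 to 103 can be pictured in code as follows. This is a minimal sketch, not the patent's implementation: the `GroundObject` schema, the function name, and the dictionary-based model are all assumptions made for illustration, and a real system would run the first preset algorithm (e.g. IBMR) rather than this trivial assembly step.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GroundObject:
    """Ground object information per step 102: position, 3D size, material."""
    position: Tuple[float, float]           # GPS longitude, latitude
    dimensions: Tuple[float, float, float]  # length, width, height in meters
    material: str                           # e.g. "tree", "glass", "reinforced concrete"

def build_three_dimensional_model(objects: List[GroundObject]) -> dict:
    """Step 103 stand-in: assemble a model of the target area in which each
    ground object is placed according to its own ground object information."""
    return {
        "object_count": len(objects),
        "objects": [
            {"pos": o.position, "size": o.dimensions, "material": o.material}
            for o in objects
        ],
    }

# The resulting model would then be serialized and sent to the VR terminal.
```

The point of the sketch is only the data flow: extracted ground objects carry their own position, size, and material, and the model is built purely from that per-object information.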
Optionally, in a specific embodiment of the present invention, step 102 includes:
determining position information in the ground object according to the video information for each ground object;
acquiring target information of the ground object in video information of each base station in the target area according to the position information;
and determining the three-dimensional information and the material information of the ground object according to the target information and a second preset algorithm.
After the server determines the target area in step 101, it receives the video information transmitted by each base station in the target area. A plurality of ground objects exist in the target area; after determining all the ground objects, the server performs the following process for each ground object one by one:
Specifically, for each ground object, the server first determines the position information of the ground object, and then, according to the position information, searches the video information of the base stations in the target area for the target information corresponding to that position. The target information is the information about this ground object, and may be video information from the same base station at different angles or video information from different base stations.
After all the target information has been determined, it is integrated and analyzed according to a second preset algorithm to determine the three-dimensional information and material information of the ground object. In addition, based on comparative analysis of the video information, the position of each cell antenna of the same base station relative to the GPS antenna can be obtained, yielding accurate position information for each antenna and improving the accuracy of the planning and design scheme.
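As a concrete illustration of how target information from several viewing angles might be fused, the sketch below uses a deliberately simple stand-in for the unspecified second preset algorithm: per axis, take the largest extent seen in any view (a single view can foreshorten a dimension), and take a majority vote over the material labels. The observation schema and both fusion rules are assumptions for illustration only.

```python
from collections import Counter
from typing import List, Tuple

def fuse_observations(
    observations: List[dict],
) -> Tuple[Tuple[float, float, float], str]:
    """Fuse the target information about one ground object gathered from
    several cells/viewing angles into 3D dimensions and a material label.

    Each observation is assumed to look like
    {"dims": (length, width, height), "material": "glass"} -
    this schema is an assumption of the sketch, not defined in the patent.
    """
    # Per axis, keep the largest extent reported by any view.
    dims = tuple(
        max(obs["dims"][axis] for obs in observations) for axis in range(3)
    )
    # Majority vote over the per-view material labels.
    material = Counter(
        obs["material"] for obs in observations
    ).most_common(1)[0][0]
    return dims, material
```

Fusing across views is what lets the panoramic sources complement one another: a dimension hidden from one base station's angle can be recovered from another's.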
Optionally, in the embodiment of the present invention, the server may further perform time dimension comparison analysis on video data in different periods to obtain dynamic variation information of the traffic and the people stream, where the variation information may be used to guide capacity dynamic planning of the wireless network, and may also be applied to other smart city applications, such as traffic control.
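The time-dimension comparison can be pictured as simple differencing of per-period measurements derived from the video. The code below is only a sketch of the idea; the (period label, people count) input schema is an assumption, and the same pattern would apply to vehicle counts.

```python
from typing import List, Tuple

def flow_change_series(counts: List[Tuple[str, int]]) -> List[Tuple[str, int]]:
    """Compare video-derived people counts across successive periods and
    report the change entering each period; positive values indicate an
    inflow, negative values an outflow."""
    return [
        (counts[i][0], counts[i][1] - counts[i - 1][1])
        for i in range(1, len(counts))
    ]
```

A capacity planner could then, for example, flag cells whose peak-period change exceeds some threshold for dynamic capacity expansion.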
In the embodiment of the invention, the ground feature information of each ground feature is determined by receiving the video information transmitted by the base station in the target area, and the ground feature information can be accurately and comprehensively acquired; according to the ground feature information and a first preset algorithm, a three-dimensional model of the target area is generated, the three-dimensional model is sent to the VR terminal, the three-dimensional model is displayed by combining the VR terminal, the ground feature condition of the target area can be intuitively displayed, and the technical effect of on-site investigation of planning designers is achieved.
Fig. 3 is a schematic flow chart of a video information processing method according to another embodiment of the present invention.
As shown in fig. 3, the video information processing method provided by the embodiment of the present invention is applied to a cell of a base station, and specifically includes the following steps:
step 301, collecting video information of the cell; the video information is panoramic video information collected at a preset position of the cell.
The preset position can be any preset position in the cell at which panoramic video information can be acquired, and a camera or other video acquisition equipment can be arranged at the preset position to acquire the panoramic video information. The panoramic video information is video information of a 720-degree or 360-degree panoramic video, that is, a video file or video data; through it, the real scene around the base station can be acquired comprehensively and in real time, so that missing ground object information is avoided. Improving the precision of video acquisition improves the precision of ground object information acquisition.
Step 302, transmitting the video information to a server.
Wherein the video information is transmitted to the server, and can be transmitted by using the existing metropolitan area transmission network in the network, such as packet transport network (Packet Transport Network, PTN).
The ground object information is used as basic data of network simulation analysis in the wireless network planning and design process, so that the effectiveness, precision and integrity of the ground object information are improved, and the accuracy of the wireless network planning and design scheme can be improved.
Optionally, in an embodiment of the present invention, after step 301, the method includes:
performing first preset data processing on the original data of the video information, and transmitting the processed first processed data to radio frequency equipment of a base station to which the cell belongs;
and enabling the radio frequency device to transmit the first processing data to the baseband device of the base station through a common public radio interface CPRI or an Ethernet-based common public radio interface ECPRI.
As a specific example, referring to fig. 4, the cell performs first preset data processing on the raw data of the acquired video information, converts the result into the input format of the third generation partnership project (3rd Generation Partnership Project, 3GPP) standardized common public radio interface (Common Public Radio Interface, CPRI) or Ethernet-based common public radio interface (Ethernet Common Public Radio Interface, ECPRI), and then feeds the first processed data directly into the radio frequency (RF) device over a wired connection; the radio frequency device transmits the first processed data, together with the information of ordinary mobile terminal users, to the baseband device of the base station through the CPRI or ECPRI.
In the embodiment of the invention, the existing wired transmission resources of the CPRI or ECPRI interfaces of the live network are used to backhaul the video information. This increases the transmission capacity and ensures high-speed transmission of the video information, while occupying no precious network air interface resources and not affecting ordinary mobile users. Specifically, from the radio frequency portion to the baseband portion, the optical fibers and optical modules of the existing CPRI or ECPRI can be reused. Depending on the video quality requirement, the needed bandwidth ranges from a few megabits per second to hundreds of megabits per second, while the optical fibers and optical modules offer strong bandwidth capability, expected to be above 25 Gbps in 5G networks (1 Gbps being 1000 megabits per second), so the backhaul of the video information can be ensured.
Optionally, in an embodiment of the present invention, step 302 includes:
and performing second preset data processing on the first processed data through the baseband equipment of the base station, and transmitting the processed second processed data to a server through the baseband equipment.
With continued reference to fig. 4, the baseband device performs second preset data processing on the received first processed data, converts it into second processed data that meets the data transmission requirements between the base station and the server, and transmits the second processed data to the server. Optionally, from the baseband portion of the base station to the server, transmission may use an existing metropolitan area transport network, such as a packet transport network (Packet Transport Network, PTN).
Further, in an embodiment of the present invention, the step of causing the radio frequency device to transmit the first processing data to a baseband device of the base station through a common public radio interface CPRI or an ethernet-based common public radio interface ECPRI includes:
and when the current traffic of the cell meets the preset condition, the radio frequency equipment transmits the first processing data to the baseband equipment of the base station through a common public radio interface CPRI or an Ethernet-based common public radio interface ECPRI.
To reduce the influence of the collection and backhaul of video information (transmission from the radio frequency device to the baseband device) on ordinary users, video collection and backhaul can be performed only when the traffic of the cell meets a preset condition, judged from the real-time uplink utilization of the cell and a preset uplink utilization threshold.
Specifically, the traffic situation of the cell may be judged according to the following formula 1:
equation 1:
T=Max(S1,M1)×M2-S2;
where T is the traffic condition parameter; when T is greater than or equal to 0, the current traffic of the cell is determined to meet the preset condition, and otherwise it does not;
S1 is the preset uplink utilization threshold, and S2 is the real-time uplink utilization;
M1 and M2 are preset parameters; for example, M1 can be set to 40% and M2 to 50%.
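Equation 1 translates directly into a gating check. The function below is a sketch: the function name is illustrative, the parameter defaults take the example values given above (M1 = 40%, M2 = 50%), and utilizations are expressed as fractions in [0, 1].

```python
def backhaul_permitted(s1: float, s2: float,
                       m1: float = 0.40, m2: float = 0.50) -> bool:
    """Equation 1: T = Max(S1, M1) x M2 - S2.

    s1: preset uplink utilization threshold (S1)
    s2: real-time uplink utilization (S2)
    m1, m2: preset parameters (example values M1 = 40%, M2 = 50%)
    Video collection and backhaul are permitted when T >= 0.
    """
    t = max(s1, m1) * m2 - s2
    return t >= 0
```

Intuitively, Max(S1, M1) x M2 sets a ceiling on acceptable uplink load; backhaul runs only while the measured utilization S2 stays at or below that ceiling, so ordinary users are not affected.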
In the embodiment of the invention, the video information is transmitted back to the server by collecting the video information of the cell of the target area and utilizing the existing wired transmission resource of the network, so that the server generates a three-dimensional model of the target area by combining the position information of the base station and the video information, and the accuracy of the wireless propagation model is improved. Meanwhile, the acquisition frequency of the video information can be obviously improved, for example, the video information is set to be once a week, so that the real-time performance of the ground object information is improved. In the embodiment of the invention, the existing wired transmission resources of the CPRI or ECPRI interface of the existing network are utilized to transmit back the video information, so that the transmission capacity is improved, the high-speed transmission of the video information can be ensured, and meanwhile, precious network air interface resources are not occupied and the use of common mobile users is not influenced.
Fig. 5 is a schematic flow chart of a video information processing method according to still another embodiment of the present invention.
As shown in fig. 5, the video information processing method provided by the embodiment of the present invention, a virtual reality VR terminal, includes:
step 501, receiving a three-dimensional model of a target area sent by a server; the three-dimensional model is generated by the server according to video information transmitted by a base station in the target area, and the video information is panoramic video information acquired at a preset position of a cell of the base station.
Virtual reality (VR) technology is a computer simulation technique that can create and let users experience a virtual world: a computer generates a simulated environment, fusing multi-source information into an interactive three-dimensional dynamic view with simulated entity behaviors, giving the user the feeling of being immersed in a real environment. The VR terminal receives the three-dimensional model sent by the server; the three-dimensional model contains the ground objects of the target area and is generated by the server according to the video information transmitted by the base stations in the target area, the video information being panoramic video information acquired at a preset position of a cell of each base station. The panoramic video information is video information of a 720-degree or 360-degree panoramic video, that is, a video file or video data; through it, the real scene around the base station can be acquired comprehensively and in real time, so that missing ground object information is avoided.
Step 502, obtaining the performance parameters of the current network in the target area.
The current-network performance parameters include, for example, the field strength described below; when the VR terminal receives the three-dimensional model of the target area, it acquires the current network performance parameters of the target area.
And step 503, overlaying the current network performance parameters into the three-dimensional model.
The VR terminal superimposes the current-network performance parameters onto the three-dimensional model, so that planning and design personnel can evaluate the current network state of the target area accurately and in all dimensions, and can then perform dynamic simulation on the three-dimensional model to refine the planning scheme.
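The superposition of step 503 can be sketched as attaching the performance parameters to the received model before rendering. The flat dictionary schema and the key names are assumptions made for this illustration, not the patent's data format.

```python
def overlay_network_params(model: dict, params: dict) -> dict:
    """Step 503 sketch: attach the current-network performance parameters
    (e.g. a field strength value per grid point) to the three-dimensional
    model so the VR terminal can render both together."""
    merged = dict(model)  # shallow copy: leave the received model untouched
    merged["network_params"] = params
    return merged
```

Keeping the received model unmodified means the terminal can re-overlay fresh parameters each time it polls the network state, without re-fetching the model from the server.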
Alternatively, taking field strength as a specific example, in dynamic simulation, coverage prediction may be performed according to the following equation 2:
equation 2:
Su=Se+ΔSm+ΔSs;
the Su is an updated field intensity value, se is a current field intensity value, deltaSm is field intensity change caused by three-dimensional model change, deltaSs is field intensity change value caused by base station scheme change (wherein DeltaSm and DeltaSs can be monitored in real time), and only the field intensity change part is needed to be calculated through the formula 2, and the whole network is not needed to be re-simulated, so that the calculated amount and calculation time delay are reduced, and the dynamic real-time simulation is realized, and the working efficiency is improved.
In the embodiment of the present invention, the three-dimensional model of the target area sent by the server is received, the current network performance parameters in the target area are obtained, and those parameters are superimposed onto the three-dimensional model. The actual environment of the target area can thus be displayed realistically through the VR terminal, replacing the on-site survey work of planning designers; planners can comprehensively and accurately analyze the network performance state and effectively output a planning and design scheme, which solves the problem of limited display capability in wireless network planning and design in the prior art.
Having described the video information processing method provided by the embodiments of the present invention, the devices that apply the method, such as a server, a base station, and a VR terminal, will be described below with reference to the accompanying drawings.
Referring to fig. 6, an embodiment of the present invention provides a server 600, the server 600 including:
the receiving module 601 is configured to receive video information transmitted by a base station in a target area, and extract a ground object in the video information; the video information is panoramic video information collected at a preset position of a cell of the base station.
Specifically, referring to fig. 2, which is a schematic application scenario of an embodiment of the present invention, a cell is a cell of a base station in the target area. The cell collects video information and transmits it to the server 600 through the base station; the server 600 receives the video information transmitted by the base station and extracts the ground objects in the video information.
The panoramic video information is 720-degree or 360-degree panoramic video, that is, video files or video data; through the panoramic video information, the real scene around the base station can be captured comprehensively and in real time, so that missing ground object information is avoided.
A ground object is a general term for the various physical objects and intangible features on the ground; it generally refers to objects relatively fixed on the earth's surface, including physical objects such as mountains, forests, and buildings, and intangible features such as provincial and county boundaries. The server 600 receives the panoramic video information transmitted by the base station and extracts the ground objects in the video information; because the panoramic information is real-time and comprehensive, the ground object extraction is likewise real-time and comprehensive, which improves the accuracy of ground object information acquisition.
A determining module 602, configured to determine the ground object information of each ground object; the ground object information includes at least the position information, three-dimensional information, and material information of the ground object.
According to the video information, the server 600 may determine, by using an artificial intelligence algorithm, the ground object information of each ground object contained in the video, where the ground object information includes at least the position information, three-dimensional information, and material information of the ground object. The position information is GPS information; the three-dimensional information includes length, width, and height data; and the material information represents the type of the ground object, such as trees, glass, or reinforced concrete.
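The per-object record described above can be sketched as a small data structure. The field names and example values are assumptions for illustration; the patent only requires position, three-dimensional, and material information per ground object.

```python
# Minimal sketch of a ground object record: GPS position information,
# three-dimensional information (length/width/height), and material.
from dataclasses import dataclass

@dataclass
class GroundObject:
    latitude: float    # position information (GPS)
    longitude: float
    length_m: float    # three-dimensional information
    width_m: float
    height_m: float
    material: str      # e.g. "tree", "glass", "reinforced concrete"

obj = GroundObject(39.9042, 116.4074, 40.0, 25.0, 60.0, "reinforced concrete")
print(obj.material)  # reinforced concrete
```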
The processing module 603 is configured to generate a three-dimensional model of the target area according to the ground feature information and a first preset algorithm, and send the three-dimensional model to a virtual reality VR terminal.
After the ground objects in the target area and their ground object information are determined, a three-dimensional model is generated according to a first preset algorithm, and each ground object is displayed in the three-dimensional model according to its ground object information. With continued reference to fig. 2, the server 600 further transmits the three-dimensional model to the VR terminal; the actual environment of the target area can be displayed realistically through the VR terminal, replacing the on-site survey work of planning designers.
Optionally, in an embodiment of the present invention, the determining module 602 is specifically configured to:
for each ground object, determine the position information of the ground object according to the video information;
obtain target information of the ground object in the video information of each base station in the target area according to the position information;
and determine the three-dimensional information and the material information of the ground object according to the target information and a second preset algorithm.
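The three steps above can be sketched as a small pipeline. The helper callables stand in for the unspecified artificial-intelligence and "second preset" algorithms, and every name below is a hypothetical placeholder, not part of the patent.

```python
# Hypothetical sketch of the determining module's three steps: locate
# each ground object, gather its target information from every base
# station's video, then reconstruct 3-D and material information.

def determine_ground_object_info(videos, locate, extract, reconstruct):
    infos = []
    for obj_id, position in locate(videos).items():          # step 1
        views = [extract(video, position)                    # step 2
                 for video in videos.values()]
        dims, material = reconstruct(views)                  # step 3
        infos.append({"id": obj_id, "position": position,
                      "dimensions": dims, "material": material})
    return infos

# Tiny stand-in algorithms so the pipeline can be exercised end to end.
videos = {"bs1": "video-1", "bs2": "video-2"}
locate = lambda v: {"bldg-1": (39.90, 116.40)}
extract = lambda video, pos: {"video": video, "pos": pos}
reconstruct = lambda views: ((40.0, 25.0, 60.0), "glass")

info = determine_ground_object_info(videos, locate, extract, reconstruct)
print(info[0]["material"])  # glass
```

Because each object is observed in the video of every base station in the target area, the reconstruction step has multiple viewpoints to work from, which is what makes recovering three-dimensional information plausible.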
In the above embodiment of the present invention, the receiving module 601 receives the video information transmitted by the base station in the target area and extracts the ground objects in the video information, and the determining module 602 determines the ground object information of each ground object, so that the ground objects and their information can be obtained accurately and comprehensively. The processing module 603 generates a three-dimensional model of the target area according to the ground object information and a first preset algorithm and sends it to the VR terminal; displaying the three-dimensional model on the VR terminal intuitively presents the ground object conditions of the target area, achieving the technical effect of replacing planning designers' on-site surveys.
Referring to fig. 7, an embodiment of the present invention provides a base station 700, the base station 700 including:
An acquisition module 701, configured to acquire video information of the cell; the video information is panoramic video information collected at a preset position of the cell.
The preset position can be any position in the cell at which panoramic video information can be collected, and a camera or other video collection device can be arranged at the preset position to collect the panoramic video information. The panoramic video information is 720-degree or 360-degree panoramic video, that is, video files or video data; through the panoramic video information, the real scene around the base station 700 can be captured comprehensively and in real time, so that missing ground object information is avoided. Improving the precision of video collection improves the accuracy of ground object information acquisition.
A first transmission module 702, configured to transmit the video information to a server.
The video information may be transmitted to the server using the existing metropolitan area transport network, such as a packet transport network (PTN).
The ground object information serves as the basic data for network simulation analysis in the wireless network planning and design process; improving its validity, precision, and integrity therefore improves the accuracy of the wireless network planning and design scheme.
Optionally, in an embodiment of the present invention, the base station 700 further includes:
the first processing module is configured to perform a first preset data processing on the original data of the video information, and transmit the processed first processed data to a radio frequency device of the base station 700 to which the cell belongs;
and a second transmission module, configured to enable the radio frequency device to transmit the first processing data to a baseband device of the base station 700 through a common public radio interface CPRI or an ethernet-based common public radio interface ECPRI.
Optionally, in an embodiment of the present invention, the transmission module is configured to:
and performing second preset data processing on the first processed data through the baseband equipment of the base station 700, and transmitting the processed second processed data to a server through the baseband equipment.
Optionally, in an embodiment of the present invention, the first processing module is configured to:
and when it is determined, according to the real-time uplink utilization rate of the cell and a preset uplink utilization rate threshold, that the current traffic of the cell meets the preset condition, transmit the first processed data to the baseband device of the base station 700 through the radio frequency device.
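The gating condition above can be sketched as a simple headroom check. The 70% threshold and the function name are assumed example values for illustration; the patent leaves the preset condition unspecified.

```python
# Sketch of the traffic-gating condition: forward the first processed
# video data to baseband only when the cell's real-time uplink
# utilization leaves enough headroom, so video backhaul does not
# crowd out ordinary user traffic.

def may_transmit_video(uplink_utilization, threshold=0.70):
    """Return True when the current traffic meets the preset condition."""
    return uplink_utilization < threshold

print(may_transmit_video(0.35))  # True  -> transmit first processed data
print(may_transmit_video(0.85))  # False -> defer transmission
```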
In the above embodiment of the present invention, the acquisition module 701 collects the video information of the cell in the target area, and the first transmission module 702 transmits the video information back to the server using the network's existing wired transmission resources, so that the server can generate the three-dimensional model of the target area by combining the position information of the base station 700 with the video information, which improves the accuracy of the wireless propagation model. Meanwhile, the collection frequency of the video information can be significantly increased, for example to once a week, which improves the timeliness of the ground object information. In the embodiment of the present invention, the second transmission module transmits the video information back over the existing wired transmission resources of the CPRI or ECPRI interface of the current network, which improves backhaul capability and ensures high-speed backhaul of the video information while occupying no valuable air interface resources and not affecting ordinary mobile users.
Referring to fig. 8, an embodiment of the present invention provides a virtual reality VR terminal 800, including:
the model receiving module 801 is configured to receive a three-dimensional model of a target area sent by a server, where the three-dimensional model is generated by the server according to video information transmitted by a base station in the target area, and the video information is panoramic video information collected at a preset position of a cell of the base station.
Virtual reality (VR) technology is a computer simulation technology that can create and let users experience a virtual world: a computer generates a simulation environment, providing a system simulation of multi-source information fusion with interactive three-dimensional dynamic views and entity behaviors, and giving the user the feeling of being immersed in a real environment. The VR terminal 800 receives the three-dimensional model sent by the server, where the three-dimensional model includes the ground objects of the target area. The three-dimensional model is generated by the server according to video information transmitted by a base station in the target area, and the video information is panoramic video information collected at a preset position of a cell of the base station. The panoramic video information is 720-degree or 360-degree panoramic video, that is, video files or video data; through the panoramic video information, the real scene around the base station can be captured comprehensively and in real time, so that missing ground object information is avoided.
A parameter obtaining module 802, configured to obtain the current network performance parameter in the target area.
The current network performance parameters include parameters such as signal strength, throughput, and field strength; when the VR terminal 800 receives the three-dimensional model of the target area, it obtains the current network performance parameters of the target area.
And the superposition module 803 is used for superposing the current network performance parameters into the three-dimensional model.
The VR terminal 800 superimposes the current network performance parameters onto the three-dimensional model, so that planning designers can comprehensively and accurately evaluate the current network state of the target area, and can then perform dynamic simulation on the three-dimensional model to accurately refine the planning scheme.
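The superimposition step can be sketched as attaching the obtained parameters to the model before rendering. The dictionary shapes and all names below are assumptions; the patent does not fix a representation for the model or the parameters.

```python
# Illustrative sketch of superimposing current network performance
# parameters onto the three-dimensional model, so a planner can
# inspect coverage together with the ground objects in the VR scene.

def overlay_parameters(model, parameters):
    """Return a copy of the model with performance parameters attached."""
    overlaid = dict(model)  # leave the received model untouched
    overlaid["network_performance"] = parameters
    return overlaid

model = {"target_area": "cell-42", "ground_objects": ["bldg-1", "tree-7"]}
params = {"signal_strength_dbm": -85.0,
          "throughput_mbps": 120.0,
          "field_strength_dbm": -92.0}

scene = overlay_parameters(model, params)
print(scene["network_performance"]["throughput_mbps"])  # 120.0
```

Keeping the received model untouched means the overlay can be recomputed whenever fresh parameters arrive, which matches the dynamic-simulation use described above.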
In the above embodiment of the present invention, the model receiving module 801 receives the three-dimensional model of the target area sent by the server, the parameter obtaining module 802 obtains the current network performance parameters in the target area, and the superimposing module 803 superimposes those parameters onto the three-dimensional model. The actual environment of the target area can thus be represented realistically through the VR terminal 800, replacing the on-site survey work of planning designers; planners can comprehensively and accurately analyze the network performance status and effectively output a planning and design scheme, solving the problem of limited display capability in wireless network planning and design in the prior art.
Fig. 9 is a schematic structural diagram of an electronic device according to another embodiment of the present invention.
Referring to fig. 9, an electronic device according to an embodiment of the present invention includes a memory 91, a processor 92, a bus 93, and a computer program stored on the memory 91 and executable on the processor. Wherein the memory 91 and the processor 92 communicate with each other via the bus 93.
The processor 92 is configured to invoke program instructions in the memory 91 to implement the method of fig. 1 when executing the program.
In another embodiment, the processor, when executing the program, implements the following method:
receiving video information transmitted by a base station in a target area, and extracting ground objects in the video information; the video information is panoramic video information collected at a preset position of a cell of the base station;
determining the ground object information of each ground object; the ground object information at least comprises position information, three-dimensional information and material information of the ground object;
and generating a three-dimensional model of the target area according to the ground feature information and a first preset algorithm, and sending the three-dimensional model to a Virtual Reality (VR) terminal.
The electronic device provided in the embodiment of the present invention may be used to execute the program corresponding to the method in the above method embodiment; details are not repeated here.
According to the electronic device provided by the embodiment of the present invention, the server receives the video information transmitted by the base station in the target area, extracts the ground features in the video information, and determines the ground feature information of each ground feature, so that the ground features and their information can be acquired accurately and comprehensively. A three-dimensional model of the target area is generated according to the ground feature information and a first preset algorithm and sent to the VR terminal; displaying the three-dimensional model on the VR terminal intuitively presents the ground feature conditions of the target area, achieving the technical effect of replacing planning designers' on-site surveys.
A further embodiment of the invention provides a non-transitory computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps as in fig. 1.
In another embodiment, the program when executed by a processor implements the method of:
receiving video information transmitted by a base station in a target area, and extracting ground objects in the video information; the video information is panoramic video information collected at a preset position of a cell of the base station;
Determining the ground object information of each ground object; the ground object information at least comprises position information, three-dimensional information and material information of the ground object;
and generating a three-dimensional model of the target area according to the ground feature information and a first preset algorithm, and sending the three-dimensional model to a Virtual Reality (VR) terminal.
The non-transitory computer-readable storage medium provided in the embodiment of the present invention realizes the method of the above method embodiment when the program is executed by the processor; details are not repeated here.
According to the non-transitory computer-readable storage medium provided by the embodiment of the present invention, by receiving the video information transmitted by the base station in the target area, extracting the ground features in the video information, and determining the ground feature information of each ground feature, the ground features and their information can be acquired accurately and comprehensively. A three-dimensional model of the target area is generated according to the ground feature information and a first preset algorithm and sent to the VR terminal; displaying the three-dimensional model on the VR terminal intuitively presents the ground feature conditions of the target area, achieving the technical effect of replacing planning designers' on-site surveys.
Yet another embodiment of the present invention discloses a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, are capable of performing the methods provided by the above-described method embodiments, for example comprising:
Receiving video information transmitted by a base station in a target area, and extracting ground objects in the video information; the video information is panoramic video information collected at a preset position of a cell of the base station;
determining the ground object information of each ground object; the ground object information at least comprises position information, three-dimensional information and material information of the ground object;
and generating a three-dimensional model of the target area according to the ground feature information and a first preset algorithm, and sending the three-dimensional model to a Virtual Reality (VR) terminal.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by software plus a necessary general hardware platform, or, of course, by hardware. Based on this understanding, the above technical solution, or the part of it that contributes to the prior art, may essentially be embodied in the form of a software product stored in a computer-readable storage medium, such as a ROM/RAM, magnetic disk, or optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A video information processing method applied to a server, the method comprising:
receiving video information transmitted by a base station in a target area, and extracting ground objects in the video information; the video information is panoramic video information acquired by a camera arranged at a preset position of a cell of the base station; the video information comprises real-scene information around the base station;
the video information also comprises position information of each cell antenna of the cell of the base station;
determining the ground object information of each ground object; the ground object information at least comprises position information, three-dimensional information and material information of the ground object;
Generating a three-dimensional model of the target area according to the ground feature information and a first preset algorithm, and sending the three-dimensional model to a Virtual Reality (VR) terminal;
wherein the transmission of the video information comprises:
the cell of the base station carries out first preset data processing on the original data of the video information, and transmits the processed first processed data to the radio frequency equipment of the base station to which the cell belongs;
the radio frequency device transmits the first processing data to a baseband device of the base station through a common public radio interface CPRI or an Ethernet-based common public radio interface ECPRI;
the base band equipment of the base station carries out second preset data processing on the first processing data and transmits the processed second processing data to the server;
after receiving the video information transmitted by the base station in the target area, the method further comprises the following steps:
acquiring dynamic change information according to the last sampling moment and video information transmitted by a base station in a current sampling receiving target area; the dynamic change information comprises people flow change information and traffic flow change information;
acquiring dynamic planning information based on the dynamic change information; the dynamic planning information is used for guiding the capacity dynamic planning of the wireless network of the base station in the target area;
After sending the three-dimensional model to the virtual reality VR terminal, further comprising:
the virtual reality VR terminal acquires current network performance parameters in the target area;
and updating the three-dimensional model according to the network performance parameters of the current network.
2. The method of claim 1, wherein the step of determining the ground object information of each ground object comprises:
determining position information in the ground object according to the video information for each ground object;
acquiring target information of the ground object in video information of each base station in the target area according to the position information;
and determining the three-dimensional information and the material information of the ground object according to the target information and a second preset algorithm.
3. A video information processing method applied to a cell of a base station, the method comprising:
collecting video information of the cell; the video information is panoramic video information acquired by a camera arranged at a preset position of the cell; the video information comprises real-scene information around the base station;
the video information also comprises position information of each cell antenna of the cell of the base station;
Performing first preset data processing on the original data of the video information, and transmitting the processed first processed data to radio frequency equipment of a base station to which the cell belongs;
causing the radio frequency device to transmit the first processing data to a baseband device of the base station via a common public radio interface CPRI or an ethernet-based common public radio interface ECPRI;
performing second preset data processing on the first processed data through the baseband equipment of the base station, and transmitting the processed second processed data to a server through the baseband equipment;
after collecting the video information of the cell, the method further comprises:
the server acquires dynamic change information according to the last sampling moment and video information transmitted by a base station in a current sampling receiving target area; the dynamic change information comprises people flow change information and traffic flow change information;
acquiring dynamic planning information based on the dynamic change information; the dynamic planning information is used for guiding the capacity dynamic planning of the wireless network of the base station in the target area;
after the server sends the three-dimensional model to the virtual reality VR terminal, the method further includes:
the virtual reality VR terminal acquires current network performance parameters in the target area;
And updating the three-dimensional model according to the network performance parameters of the current network.
4. The method according to claim 3, wherein the step of causing the radio frequency device to transmit the first processing data to a baseband device of the base station via a common public radio interface CPRI or an Ethernet-based common public radio interface ECPRI comprises:
and when the current traffic of the cell meets the preset condition, the radio frequency equipment transmits the first processing data to the baseband equipment of the base station through a common public radio interface CPRI or an Ethernet-based common public radio interface ECPRI.
5. A video information processing method applied to a virtual reality VR terminal, the method comprising:
receiving a three-dimensional model of a target area sent by a server; the three-dimensional model is generated by the server according to video information transmitted by a base station in the target area, and the video information is panoramic video information collected at a preset position of a cell of the base station; the video information comprises real-scene information around the base station;
The video information also comprises position information of each cell antenna of the cell of the base station;
acquiring current network performance parameters in the target area;
updating the three-dimensional model according to the network performance parameters of the existing network;
wherein the transmission of the video information comprises:
the cell of the base station carries out first preset data processing on the original data of the video information, and transmits the processed first processed data to the radio frequency equipment of the base station to which the cell belongs;
the radio frequency device transmits the first processing data to a baseband device of the base station through a common public radio interface CPRI or an Ethernet-based common public radio interface ECPRI;
the base band equipment of the base station carries out second preset data processing on the first processing data and transmits the processed second processing data to the server;
after the cell of the base station transmits the acquired video information of the cell to the server, the method further comprises the following steps:
the server acquires dynamic change information according to the last sampling moment and video information transmitted by a base station in a current sampling receiving target area; the dynamic change information comprises people flow change information and traffic flow change information;
Acquiring dynamic planning information based on the dynamic change information; the dynamic planning information is used for guiding the capacity dynamic planning of the wireless network of the base station in the target area.
6. A server, the server comprising:
the receiving module is used for receiving video information transmitted by a base station in a target area and extracting ground objects in the video information; the video information is panoramic video information acquired by a camera arranged at a preset position of a cell of the base station; the video information comprises real-scene information around the base station;
the video information also comprises position information of each cell antenna of the cell of the base station;
the determining module is used for determining the ground object information of each ground object; the ground object information at least comprises position information, three-dimensional information and material information of the ground object;
the processing module is used for generating a three-dimensional model of the target area according to the ground feature information and a first preset algorithm and sending the three-dimensional model to a Virtual Reality (VR) terminal;
wherein the transmission of the video information comprises:
the cell of the base station carries out first preset data processing on the original data of the video information, and transmits the processed first processed data to the radio frequency equipment of the base station to which the cell belongs;
The radio frequency device transmits the first processing data to a baseband device of the base station through a common public radio interface CPRI or an Ethernet-based common public radio interface ECPRI;
the base band equipment of the base station carries out second preset data processing on the first processing data and transmits the processed second processing data to the server;
after receiving the video information transmitted by the base station in the target area, the method further comprises the following steps:
acquiring dynamic change information according to the last sampling moment and video information transmitted by a base station in a current sampling receiving target area; the dynamic change information comprises people flow change information and traffic flow change information;
acquiring dynamic planning information based on the dynamic change information; the dynamic planning information is used for guiding the capacity dynamic planning of the wireless network of the base station in the target area;
after sending the three-dimensional model to the virtual reality VR terminal, further comprising:
the virtual reality VR terminal acquires current network performance parameters in the target area;
and updating the three-dimensional model according to the network performance parameters of the current network.
7. An electronic device comprising a memory, a processor, a bus, and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the video information processing method of any one of claims 1-5 when the program is executed.
8. A non-transitory computer readable storage medium having a computer program stored thereon, characterized by: the program, when executed by a processor, implements the steps in the video information processing method as claimed in any one of claims 1 to 5.
CN201810380249.8A 2018-04-25 2018-04-25 Video information processing method and server Active CN110400373B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810380249.8A CN110400373B (en) 2018-04-25 2018-04-25 Video information processing method and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810380249.8A CN110400373B (en) 2018-04-25 2018-04-25 Video information processing method and server

Publications (2)

Publication Number Publication Date
CN110400373A CN110400373A (en) 2019-11-01
CN110400373B true CN110400373B (en) 2023-11-03

Family

ID=68322033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810380249.8A Active CN110400373B (en) 2018-04-25 2018-04-25 Video information processing method and server

Country Status (1)

Country Link
CN (1) CN110400373B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113595779B (en) * 2021-07-23 2023-03-24 中国电信股份有限公司 Method, apparatus, medium, and network analysis system for acquiring data for network analysis

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090195655A1 (en) * 2007-05-16 2009-08-06 Suprabhat Pandey Remote control video surveillance apparatus with wireless communication
US10856153B2 (en) * 2015-04-14 2020-12-01 ETAK Systems, LLC Virtual 360-degree view modification of a telecommunications site for planning, engineering, and installation
US10354441B2 (en) * 2016-05-20 2019-07-16 ETAK Systems, LLC Augmented reality systems and methods for telecommunications site modeling
CN107102728B (en) * 2017-03-28 2021-06-18 北京犀牛数字互动科技有限公司 Display method and system based on virtual reality technology
CN107222467B (en) * 2017-05-16 2020-05-26 三维通信股份有限公司 Method for realizing mobile communication base station panoramic operation and maintenance system
CN107040933B (en) * 2017-05-16 2023-07-04 三维通信股份有限公司 Panoramic operation and maintenance system of mobile communication base station

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhenhai Zhang; Kejie Li. Study on algorithm for panoramic image basing on high sensitivity and high resolution panoramic surveillance camera. 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance, 2013, full text. *
Application of UAV aerial photography, VR panorama and 3D reconstruction in base station survey; Cao Hui; He Jianxun; Communications World (Issue 32); full text *
Li Yubai. High-Performance Computing Technology for 4G Communications. University of Electronic Science and Technology of China Press, 2014, 71-72. *

Also Published As

Publication number Publication date
CN110400373A (en) 2019-11-01

Similar Documents

Publication Publication Date Title
Baidya et al. FlyNetSim: An open source synchronized UAV network simulator based on ns-3 and ardupilot
CN111126618B (en) Federal learning method and device based on multi-source heterogeneous system
Al-Hourani On the probability of line-of-sight in urban environments
CN106157354B (en) A kind of three-dimensional scenic switching method and system
CN101975894B (en) 4D (Four Dimensional) thunder collecting method of sensor network
CN102638889A (en) Indoor wireless terminal positioning method based on Bayes compression sensing
JP6696859B2 (en) Quality estimation device and quality estimation method
CN110400373B (en) Video information processing method and server
CN113873532A (en) Intelligent park 5G network planning method
KR101992672B1 (en) Visual impact analyzing method for overhead transmitting line
CN105357654B (en) A kind of emergency communication service system and its working method based on the Big Dipper
CN117234107A (en) Multi-unmanned aerial vehicle ad hoc network simulation system and method based on illusion engine
KR102547748B1 (en) Server, method and computer program for managing digital twin model
Yan et al. Intergrating UAV development technology with augmented reality toward landscape tele-simulation
CN113362023A (en) Data sharing method for assisting urban and rural planning and compiling
Black et al. NetSimulyzer: A 3D network simulation analyzer for NS-3
CN113784107A (en) Three-dimensional visual display method and system for video signal
CN104183011A (en) Three-dimensional interactive virtual reality (3D IVR) restoring system
CN113379908A (en) Three-dimensional GISVR circuit live-action platform building system for automatic inspection of power equipment
CN210405541U (en) Unmanned aerial vehicle live broadcast system
CN114387424A (en) Indoor distribution system fault positioning method, device, equipment and readable medium
CN110674234B (en) Map data acquisition method, apparatus and storage medium
CN117201584B (en) Supervision and inspection method and system based on geospatial data and real-time video
Seo et al. Constructing 3-dimensional 5G coverage map for real-time airborne missions
CN114501618B (en) Positioning model training method, positioning method, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant