CN108573522B - Display method of marker data and terminal


Info

Publication number
CN108573522B
CN108573522B
Authority
CN
China
Prior art keywords
data
information
drawn
projection
point cloud
Prior art date
Legal status
Active
Application number
CN201710150737.5A
Other languages
Chinese (zh)
Other versions
CN108573522A (en)
Inventor
Jiang Min (江旻)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710150737.5A
Publication of CN108573522A
Application granted
Publication of CN108573522B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses a display method for marker data and a terminal. The method includes: collecting first data, where the first data identifies the position in three-dimensional space of at least one sampling point in the acquired area; establishing a three-dimensional virtual space according to the first data; acquiring second data, where the second data represents media information obtained by presenting a target object in a two-dimensional space within the acquired area; projecting the second data directly into the three-dimensional virtual space for display according to an extracted projection strategy, to obtain a display result containing third data; and distinguishing the marker attribute information of the target object with corresponding color identifiers according to the third data.

Description

Display method of marker data and terminal
Technical Field
The present invention relates to data display technologies, and in particular, to a display method and a terminal for marker data.
Background
Road markers such as traffic lights and speed-limit signs assist road travel, and combining them with geographic position information can provide users with a more comprehensive data display for their journeys. Road markers can be identified preliminarily by combining geometric feature extraction with pattern matching, but the accuracy of that approach is limited, so to guarantee data accuracy a data presentation scheme is needed that supports later editing and verification of the data.
Among the data display schemes provided by the prior art: in one scheme, to improve data accuracy, color information corresponding to the road-marker data is attached to the initial point cloud data, which greatly increases the amount of data to be processed, demands a heavy investment of time and labor, and results in low processing efficiency. In another scheme, the processing window must be switched frequently between the two-dimensional and three-dimensional scenes, which is inconvenient for data editing and likewise inefficient. In a third scheme, during the two-dimensional to three-dimensional mapping, noise interference and low sampling precision reduce the identification precision of the road-marker data, so the expected processing efficiency cannot be achieved. The related art offers no effective solution to these problems.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and a terminal for displaying marker data, which at least solve the above problems in the prior art.
The technical scheme of the embodiment of the invention is realized as follows:
An embodiment of the invention provides a display method of marker data, which includes the following steps:
acquiring first data, wherein the first data is used for identifying the position of at least one sampling point in the acquired area in a three-dimensional space;
establishing a three-dimensional virtual space according to the first data;
acquiring second data, wherein the second data is used for representing media information obtained by presenting a target object in a two-dimensional space in an acquired area;
projecting the second data into the three-dimensional virtual space for display according to the extracted projection strategy to obtain a display result containing third data;
and distinguishing the marker attribute information of the target object with corresponding color identifiers according to the third data.
In the above scheme, the method further comprises:
and after the first data are collected, carrying out data re-blocking pretreatment operation on the first data.
In the foregoing solution, the performing a data re-blocking preprocessing operation on the first data includes:
acquiring the acquisition area where the first data is located;
dividing the acquisition region according to the designated region division parameters to obtain at least two first target regions;
expanding the boundaries of the at least two first target areas according to the area boundary enhancement parameters to obtain at least two second target areas;
and denoising and normal estimation are carried out on each sampling point in the at least two second target regions by taking k neighborhood points as references to obtain normal information, and the normal information is added into the first data.
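The patent gives no reference implementation; purely as illustration, the denoising and k-neighborhood normal-estimation step above might be sketched in numpy as follows (the function name, the value of k, and the statistical outlier criterion are assumptions, not the patent's specification):

```python
import numpy as np

def denoise_and_estimate_normals(points, k=16, std_ratio=2.0):
    """Statistical denoising plus PCA normal estimation, each sampling point
    using its k nearest neighbours as reference."""
    n = len(points)
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]              # skip the point itself
    mean_dist = np.sqrt(np.take_along_axis(d2, knn, axis=1)).mean(axis=1)

    # Denoise: drop points whose mean neighbour distance is abnormally large.
    keep = mean_dist < mean_dist.mean() + std_ratio * mean_dist.std()

    # Normal = eigenvector of the neighbourhood covariance with the smallest
    # eigenvalue; this is the normal information added to the first data.
    normals = np.empty_like(points, dtype=float)
    for i in range(n):
        nbrs = points[knn[i]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        _, vecs = np.linalg.eigh(cov)                     # eigenvalues ascending
        normals[i] = vecs[:, 0]
    return points[keep], normals[keep]
```

At production scale a spatial index such as scipy.spatial.cKDTree would replace the O(n²) distance matrix used here for brevity.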
In the foregoing solution, the performing a data re-blocking preprocessing operation on the first data further includes:
after the denoising and the normal estimation are carried out, when each second target area of the at least two second target areas is processed, the size parameter of the first target area is obtained;
and cutting each second target area according to the size parameter of the first target area so as to delete the redundant area beyond the size range of the first target area.
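Likewise, a minimal sketch of the re-blocking with boundary expansion and subsequent cropping (the tile size and margin follow the 256 × 256 / 261 × 261 square-meter example given in the detailed description; everything else is assumed):

```python
import numpy as np

TILE = 256.0    # first target region size in metres (example from the description)
MARGIN = 2.5    # boundary enhancement per side, so an expanded tile spans 261 m

def split_with_margin(points, tile=TILE, margin=MARGIN):
    """Yield (first-region bounds, expanded second-region block) pairs."""
    origin = points[:, :2].min(axis=0)
    keys = np.unique((points[:, :2] - origin) // tile, axis=0)
    for key in keys:
        lo = origin + key * tile                 # first target region [lo, lo + tile)
        hi = lo + tile
        mask = np.all((points[:, :2] >= lo - margin) &
                      (points[:, :2] < hi + margin), axis=1)
        yield (lo, hi), points[mask]

def crop_to_first_region(block, bounds):
    """Delete the redundant margin that exceeds the first target region."""
    lo, hi = bounds
    keep = np.all((block[:, :2] >= lo) & (block[:, :2] < hi), axis=1)
    return block[keep]
```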
In the above scheme, the method further comprises:
acquiring first data containing the normal information;
and performing data densification processing on the first data according to the normal information to obtain fourth data containing supplementary information.
In the above scheme, the method further comprises:
acquiring first data containing the normal information;
acquiring fourth data containing the supplementary information;
and taking a data set formed by the first data and the fourth data as data to be drawn, performing depth test on the data to be drawn according to projection parameters in the process of drawing, drawing to obtain each pixel point in the image, and recording the depth value of the current mapping area.
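For illustration, the depth pass described in this scheme might look as follows on the CPU; on a GPU it is simply drawing with the depth test enabled and keeping the resulting depth buffer (clip-space input and image size are assumptions):

```python
import numpy as np

def render_depth_map(points_clip, width=512, height=512):
    """Record, per pixel of the projector image, the depth value of the
    nearest drawn point of the data to be drawn."""
    ndc = points_clip[:, :3] / points_clip[:, 3:4]        # perspective divide
    ndc = ndc[np.all(np.abs(ndc) <= 1.0, axis=1)]         # keep points in frustum
    px = ((ndc[:, 0] * 0.5 + 0.5) * (width - 1)).astype(int)
    py = ((ndc[:, 1] * 0.5 + 0.5) * (height - 1)).astype(int)
    depth = np.full((height, width), np.inf)
    for x, y, z in zip(px, py, ndc[:, 2]):
        if z < depth[y, x]:                               # classic depth test
            depth[y, x] = z
    return depth
```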
In the above scheme, projecting the second data into the three-dimensional virtual space for display according to the extracted projection strategy includes:
acquiring first data containing the normal information;
acquiring fourth data containing the supplementary information;
taking a data set formed by the first data and the fourth data as the data to be drawn, extracting a projection texture mapping strategy, and judging, when the data to be drawn is mapped according to the projection texture mapping strategy, whether coordinate system conversion is currently needed;
when the coordinate system conversion is needed, firstly, the coordinate system conversion is carried out, after the coordinate system conversion is successful, the data to be drawn are drawn, and the texture coordinate corresponding to each coloring point in the data to be drawn is determined in real time;
and performing color adding processing on the points needing to be colored in the data to be drawn according to the texture coordinates to obtain the third data.
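A hedged sketch of the coordinate system conversion and the per-shading-point texture coordinates (the projector pose and an OpenGL-style projection matrix are assumed inputs, not the patent's API):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """World-to-projector coordinate system conversion (a 4x4 view matrix)."""
    eye = np.asarray(eye, float)
    f = np.asarray(target, float) - eye
    f /= np.linalg.norm(f)
    s = np.cross(f, np.asarray(up, float)); s /= np.linalg.norm(s)
    u = np.cross(s, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

def texture_coords(points_world, view, proj):
    """Texture coordinate per shading point: transform into the projector's
    clip space, then remap NDC x/y from [-1, 1] to [0, 1]."""
    pts = np.c_[points_world, np.ones(len(points_world))]
    clip = (proj @ view @ pts.T).T
    ndc = clip[:, :3] / clip[:, 3:4]
    return ndc[:, :2] * 0.5 + 0.5, ndc[:, 2]              # (u, v), projector depth
```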
In the above scheme, performing color addition processing on the point to be colored in the data to be drawn according to the texture coordinate includes:
judging, according to the texture coordinate corresponding to each shading point, whether the points to be colored in the data to be drawn lie within the projection area; if so, determining the color values corresponding to the points to be colored in the data to be drawn according to the result of comparing their first depth values in the projector coordinate system with the second depth values in the depth map, and performing the color addition processing on the points to be colored in the data to be drawn according to those color values;
the depth map is the image obtained by performing a depth test while drawing the data to be drawn according to the projection parameters.
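The projection-area test and first-depth/second-depth comparison of this scheme could be sketched as follows (the depth bias constant is an assumption used to absorb sampling precision):

```python
import numpy as np

def shade_points(uv, point_depth, depth_map, image, bias=1e-3):
    """Colour only shading points that (a) fall inside the projection area and
    (b) pass the first-depth vs. second-depth comparison, i.e. are visible
    to the projector rather than occluded."""
    h, w = depth_map.shape
    ih, iw = image.shape[:2]
    colors = np.zeros((len(uv), 3))
    inside = np.all((uv >= 0.0) & (uv <= 1.0), axis=1)
    for i in np.where(inside)[0]:
        x, y = int(uv[i, 0] * (w - 1)), int(uv[i, 1] * (h - 1))
        if point_depth[i] <= depth_map[y, x] + bias:      # not occluded
            colors[i] = image[int(uv[i, 1] * (ih - 1)), int(uv[i, 0] * (iw - 1))]
    return colors
```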
An embodiment of the present invention provides a terminal, where the terminal includes:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring first data, and the first data is used for identifying the position of at least one sampling point in an acquired area in a three-dimensional space;
the space modeling unit is used for establishing a three-dimensional virtual space according to the first data;
the acquisition unit is used for acquiring second data, and the second data is used for representing media information obtained by presenting the target object in a two-dimensional space in the acquired area;
the projection unit is used for projecting the second data into the three-dimensional virtual space for display according to the extracted projection strategy to obtain a display result containing third data;
and the identification unit is configured to distinguish the marker attribute information of the target object with corresponding color identifiers according to the third data.
In the foregoing solution, the terminal further includes:
and the preprocessing unit is used for carrying out data re-blocking preprocessing operation on the first data.
In the foregoing scheme, the preprocessing unit is further configured to:
acquiring the acquisition area where the first data is located;
dividing the acquisition region according to the designated region division parameters to obtain at least two first target regions;
expanding the boundaries of the at least two first target areas according to the area boundary enhancement parameters to obtain at least two second target areas;
and denoising and normal estimation are carried out on each sampling point in the at least two second target regions by taking k neighborhood points as references to obtain normal information, and the normal information is added into the first data.
In the foregoing scheme, the preprocessing unit is further configured to:
after the denoising and the normal estimation are carried out, when each second target area of the at least two second target areas is processed, the size parameter of the first target area is obtained;
and cutting each second target area according to the size parameter of the first target area so as to delete the redundant area beyond the size range of the first target area.
In the foregoing solution, the terminal further includes: an encryption unit to:
acquiring first data containing the normal information;
and perform data densification processing on the first data according to the normal information to obtain fourth data containing supplementary information.
In the foregoing solution, the terminal further includes: a depth test unit to:
acquiring first data containing the normal information;
acquiring fourth data containing the supplementary information;
and taking a data set formed by the first data and the fourth data as data to be drawn, performing depth test on the data to be drawn according to projection parameters in the process of drawing, drawing to obtain each pixel point in the image, and recording the depth value of the current mapping area.
In the foregoing solution, the projection unit is further configured to:
acquiring first data containing the normal information;
acquiring fourth data containing the supplementary information;
taking a data set formed by the first data and the fourth data as the data to be drawn, and judging, when the data to be drawn is mapped according to a projection texture mapping strategy, whether coordinate system conversion is currently needed;
when the coordinate system conversion is needed, firstly, the coordinate system conversion is carried out, after the coordinate system conversion is successful, the data to be drawn are drawn, and the texture coordinate corresponding to each coloring point in the data to be drawn is determined in real time;
and performing color adding processing on the points needing to be colored in the data to be drawn according to the texture coordinates to obtain the third data.
In the foregoing solution, the projection unit is further configured to:
judging, according to the texture coordinate corresponding to each shading point, whether the points to be colored in the data to be drawn lie within the projection area; if so, determining the color values corresponding to the points to be colored in the data to be drawn according to the result of comparing their first depth values in the projector coordinate system with the second depth values in the depth map, and performing the color addition processing on the points to be colored in the data to be drawn according to those color values;
the depth map is the image obtained by performing a depth test while drawing the data to be drawn according to the projection parameters.
The display method for marker data provided by embodiments of the invention includes: collecting first data, where the first data identifies the position in three-dimensional space of at least one sampling point in the acquired area; establishing a three-dimensional virtual space according to the first data; acquiring second data, where the second data represents media information obtained by presenting a target object in a two-dimensional space within the acquired area; projecting the second data into the three-dimensional virtual space for display according to the extracted projection strategy to obtain a display result containing third data; and distinguishing the marker attribute information of the target object with corresponding color identifiers according to the third data.
With the embodiments of the invention, after the three-dimensional virtual space is established from the first data, the second data is projected into that space for display according to the projection strategy, and the marker attribute information of the target object can be distinguished with corresponding color identifiers according to the third data contained in the display result. The expected processing efficiency is therefore achieved without additionally increasing the amount of data to be processed, while the identification precision of the road-marker data is preserved, so processing efficiency is greatly improved compared with prior-art data display schemes.
Drawings
FIG. 1 is a schematic diagram of an alternative hardware architecture of a mobile terminal implementing various embodiments of the present invention;
FIG. 2 is a schematic diagram of a communication system of the mobile terminal shown in FIG. 1;
FIG. 3 is a schematic flow chart illustrating a method according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating an implementation of another method according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a terminal assembly according to an embodiment of the present invention;
FIGS. 6-7 are schematic diagrams of prior art display scenarios;
FIG. 8 is a schematic diagram of a virtual-real fusion display scenario to which an embodiment of the present invention is applied;
FIG. 9 is a flow chart of the point cloud data preprocessing using an embodiment of the present invention;
FIG. 10 is a block diagram of a point cloud data block according to an embodiment of the present invention;
FIG. 11 is a schematic overview of a display process applying an embodiment of the present invention;
FIG. 12 is a schematic view of a complete flow chart of a marker display using an embodiment of the present invention;
FIG. 13 is a schematic view of a point cloud projection using an embodiment of the present invention;
FIG. 14 is a flowchart illustrating a process of projection texture mapping according to an embodiment of the present invention.
Detailed Description
The following describes the embodiments in further detail with reference to the accompanying drawings.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements merely facilitate the description of the embodiments of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks disclosed have not been described in detail as not to unnecessarily obscure aspects of the embodiments.
In addition, although the terms "first", "second", etc. are used herein to describe various elements (or thresholds or applications or instructions or operations), these elements should not be limited by those terms, which serve only to distinguish one element (or threshold or application or instruction or operation) from another. For example, without departing from the scope of the invention, a first operation may be referred to as a second operation, and a second operation as a first operation; both are operations, but they are not the same operation.
The steps in the embodiments of the present invention are not necessarily processed in the order described; they may be reordered as appropriate, and steps may be deleted from or added to an embodiment as required.
The term "and/or" in embodiments of the present invention refers to any and all possible combinations including one or more of the associated listed items. It is also to be noted that: when used in this specification, the term "comprises/comprising" specifies the presence of stated features, integers, steps, operations, elements and/or components but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements and/or components and/or groups thereof.
The intelligent terminal (e.g., mobile terminal) of the embodiments of the present invention may be implemented in various forms. For example, the mobile terminal described in the embodiments of the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a Digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a navigation device, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like. In the following, it is assumed that the terminal is a mobile terminal. However, it will be understood by those skilled in the art that the configuration according to the embodiment of the present invention can be applied to a fixed type terminal in addition to elements particularly used for moving purposes.
Fig. 1 is a schematic diagram of an alternative hardware structure of a mobile terminal implementing various embodiments of the present invention. The mobile terminal 100 may be, but is not limited to, a vehicle-mounted terminal or a mobile phone terminal.
When the mobile terminal 100 is a vehicle-mounted terminal, it may include: the GPS positioning unit 111, the wireless communication unit 112, the wireless internet unit 113, the alarm communication unit 114, the map unit 121, the voice unit 122, the user input unit 130, the acquisition unit 140, the spatial modeling unit 141, the obtaining unit 142, the projection unit 143, the identification unit 144, the output unit 150, the display unit 151, the audio output unit 152, the storage unit 160, the interface unit 170, the processing unit 180, the power supply unit 190, and the like. Fig. 1 illustrates a mobile terminal having various components, but it is to be understood that not all illustrated components are required; more or fewer components may be implemented instead. The elements of the in-vehicle terminal will be described in detail below.
The GPS positioning unit 111 is used for receiving information transmitted by satellites to check or acquire position information of the vehicle-mounted terminal, for example, performing single-satellite positioning or double-satellite positioning or the like according to the transmitted information to determine the position of the vehicle relative to the navigation path or the position of a certain lane on the navigation path or the like. Specifically, distance information and accurate time information from three or more satellites are calculated and triangulation is applied to the calculated information, thereby accurately calculating three-dimensional current location information according to longitude, latitude, and altitude. Currently, a method for calculating position and time information uses three satellites and corrects an error of the calculated position and time information by using another satellite. Further, the GPS positioning unit 111 can also calculate speed information by continuously calculating current position information in real time, and obtain vehicle speed information of the current vehicle.
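As a minimal, illustrative sketch of the positioning computation described here (receiver clock bias and the fourth-satellite correction are omitted for brevity; all names are assumptions):

```python
import numpy as np

def position_fix(sat_pos, ranges, iters=10):
    """Least-squares position from >= 3 satellite positions (n x 3) and the
    measured distances to them, via Gauss-Newton iteration."""
    x = sat_pos.mean(axis=0)                      # crude initial guess
    for _ in range(iters):
        diff = x - sat_pos
        dist = np.linalg.norm(diff, axis=1)
        jac = diff / dist[:, None]                # d||x - s_i|| / dx
        dx, *_ = np.linalg.lstsq(jac, ranges - dist, rcond=None)
        x = x + dx
    return x    # position in the satellites' frame; convert to lon/lat/alt
```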
A wireless communication unit 112 that allows radio communication between the in-vehicle terminal and a wireless communication system or network. For example, the wireless communication unit can perform communication in various forms, and can perform communication interaction with the background server in a broadcast form, a Wi-Fi communication form, a mobile communication (2G, 3G or 4G) form, and the like. When communication interaction is performed in a broadcast form, a broadcast signal and/or broadcast-related information may be received from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like. Also, the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast associated information may also be provided via a mobile communication network. The broadcast signal may exist in various forms, for example, it may exist in the form of an Electronic Program Guide (EPG) of Digital Multimedia Broadcasting (DMB), an Electronic Service Guide (ESG) of Digital Video Broadcasting Handheld (DVB-H), and the like. The broadcast signal and/or broadcast associated information may be stored in the storage unit 160 (or other type of storage medium). Wi-Fi is a technology that can connect terminals such as personal computers and mobile terminals (such as vehicle-mounted terminals and mobile phone terminals) with each other in a wireless mode, and when a Wi-Fi communication form is adopted, a Wi-Fi hotspot can be accessed so as to access a Wi-Fi network. Wi-Fi hotspots are created by installing access points over internet connections. This access point transmits wireless signals over short distances, typically covering 300 feet. When the Wi-Fi-supported vehicle-mounted terminal encounters a Wi-Fi hotspot, the vehicle-mounted terminal can be wirelessly connected to a Wi-Fi network. In the form of mobile communication (2G, 3G, or 4G), radio signals are transmitted to and/or received from at least one of a base station (e.g., access point, node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless internet unit 113 supports various data transmission communication technologies of the in-vehicle terminal including wireless in order to access the internet. The unit may be internally or externally coupled to the in-vehicle terminal. The Wireless internet Access technology related to the unit may include a Wireless Local Area Network (WLAN), a Wireless broadband (Wibro), a worldwide interoperability for microwave Access (Wimax), a High Speed Downlink Packet Access (HSDPA), and the like.
And the alarm communication unit 114 is used for sending an alarm signal to the background server to notify the abnormal information of the vehicle. Specifically, the current vehicle position information obtained by the GPS positioning unit and the vehicle abnormal information are packaged and transmitted to a background server, such as an alarm or a monitoring center for processing. The map unit 121 is configured to store map information, where the map information may be map information that is downloaded online and then used offline, or map information that is downloaded in real time. The map information can also be updated in time. The voice unit 122 is configured to perform voice operation, on one hand, receive a voice command of a user, and on the other hand, perform voice broadcast in combination with a current vehicle position, navigation information, and a background processing result of vehicle abnormal information, so as to remind the user of paying attention to a road condition, and the like.
The vehicle-mounted terminal can be applied to 2G, 3G or 4G, wireless technologies and the like, supports high-speed data transmission, simultaneously transmits voice and data information, opens an interface, is unlimited in application, and can be easily matched with various I/O devices for use.
The user input unit 130 may generate key input data to control various operations of the in-vehicle terminal according to a command input by the user. The user input unit 130 allows a user to input various types of information, and may include a keyboard, a mouse, a touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), a wheel, a joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
An acquisition unit 140 configured to acquire first data for identifying a position in three-dimensional space of at least one sampling point in the acquired region; a space modeling unit 141, configured to establish a three-dimensional virtual space according to the first data; an obtaining unit 142, configured to obtain second data, where the second data is used to represent media information obtained by presenting a target object in a two-dimensional space in an acquired region; the projection unit 143 is configured to project the second data into the three-dimensional virtual space for display according to the extracted projection strategy, so as to obtain a display result including third data; an identifying unit 144, configured to distinguish the target object with corresponding color identification according to the third data, where the color identification is used for identifying the mark attribute information.
The interface unit 170 serves as an interface through which at least one external device is connected to the in-vehicle terminal. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification unit, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification unit may store various information for authenticating the User using the in-vehicle terminal and may include a User identification Unit (UIM), a Subscriber identification unit (SIM), a Universal Subscriber identification Unit (USIM), and the like. In addition, a device having an identification unit (hereinafter referred to as "identification device") may take the form of a smart card, and thus, the identification device may be connected with the in-vehicle terminal via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the in-vehicle terminal or may be used to transmit data between the in-vehicle terminal and the external device.
In addition, when the in-vehicle terminal is connected with the external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the in-vehicle terminal or may serve as a path through which various command signals input from the cradle are transmitted to the in-vehicle terminal. Various command signals or power input from the cradle may be used as signals for identifying whether the in-vehicle terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output unit 152, and the like.
The display unit 151 may display information processed in the in-vehicle terminal. For example, the in-vehicle terminal may display a related User Interface (UI) or a Graphical User Interface (GUI). When the in-vehicle terminal is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are overlapped with each other in the form of a layer to form a touch screen, the display unit 151 may serve as an input device and an output device. The Display unit 151 may include at least one of a Liquid Crystal Display (LCD), a Thin Film Transistor LCD (TFT-LCD), an Organic Light-Emitting Diode (OLED) Display, a flexible Display, a three-dimensional (3D) Display, and the like. Some of these displays may be configured to be transparent to allow a user to see from the outside, which may be referred to as transparent displays, and a typical transparent display may be, for example, a Transparent Organic Light Emitting Diode (TOLED) display or the like. According to a particularly desired embodiment, the in-vehicle terminal may include two or more display units (or other display devices), for example, the in-vehicle terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The audio output unit 152 may convert audio data received or stored in the storage unit 160 into an audio signal and output as sound when the in-vehicle terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 152 may provide audio output related to a specific function performed by the in-vehicle terminal (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 152 may include a speaker, a buzzer, and the like.
The storage unit 160 may store software programs or the like for processing and controlling operations performed by the processing unit 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, and the like) that has been output or is to be output. Also, the storage unit 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The storage unit 160 may include at least one type of storage medium including a flash Memory, a hard disk, a multimedia card, a card-type Memory (e.g., SD or DX Memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic Memory, a magnetic disk, an optical disk, and so on. Also, the in-vehicle terminal may cooperate with a network storage device that performs a storage function of the storage unit 160 through network connection.
The processing unit 180 generally controls the overall operation of the in-vehicle terminal. For example, the processing unit 180 performs control and processing related to voice calls, data communications, video calls, and the like. As another example, the processing unit 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate the elements and components under the control of the processing unit 180.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the processing unit 180. For a software implementation, an implementation such as a procedure or a function may be realized with separate software units that each perform at least one function or operation. The software codes may be implemented by software applications (or programs) written in any suitable programming language, stored in the storage unit 160 and executed by the processing unit 180. A specific hardware entity of the storage unit 160 may be a memory, and a specific hardware entity of the processing unit 180 may be a controller.
Up to this point, the above-described unit composition structure represented by a vehicle-mounted terminal in a mobile terminal has been described in terms of its functions.
The mobile terminal 100 as shown in fig. 1 may be configured to operate with communication systems such as wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
A communication system in which the mobile terminal 100 according to an embodiment of the present invention is capable of operating will now be described with reference to fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), the Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), the Global System for Mobile Communications (GSM), and the like. By way of non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.
Referring to fig. 2, the CDMA wireless communication system may include a plurality of Mobile terminals 100, a plurality of Base Stations (BSs) 270, a Base Station Controller (BSC) 275, and a Mobile Switching Center (MSC) 280. The MSC 280 is configured to interface with a Public Switched Telephone Network (PSTN) 290. The MSC 280 is also configured to interface with the BSC 275, which may be coupled to the BS 270 via a backhaul. The backhaul may be constructed according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that a system as shown in fig. 2 may include multiple BSCs 275.
Each BS270 may serve one or more sectors (or regions), each sector covered by a multi-directional antenna or an antenna pointing in a particular direction being radially distant from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS270 may be configured to support multiple frequency allocations, with each frequency allocation having a particular frequency spectrum (e.g., 1.25MHz, 5MHz, etc.).
The intersection of partitions with frequency allocations may be referred to as a CDMA channel. The BS270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology. In such a case, the term "base station" may be used to generically refer to a single BSC275 and at least one BS 270. The base stations may also be referred to as "cells". Alternatively, each partition of a particular BS270 may be referred to as a plurality of cell sites.
As shown in fig. 2, a Broadcast Transmitter (BT) 295 transmits a Broadcast signal to the mobile terminal 100 operating within the system. A broadcast receiving unit 111 as shown in fig. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295. In fig. 2, several satellites 300 are shown, for example, Global Positioning System (GPS) satellites 300 may be employed. The satellite 300 assists in locating at least one of the plurality of mobile terminals 100.
In fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The location information unit 115 as shown in fig. 1 is generally configured to cooperate with the satellites 300 to obtain desired positioning information. Other techniques that can track the location of the mobile terminal may be used instead of or in addition to GPS tracking techniques. In addition, at least one GPS satellite 300 may selectively or additionally process satellite DMB transmission.
As a typical operation of the wireless communication system, the BS270 receives reverse link signals from various mobile terminals 100. The mobile terminal 100 is generally engaged in conversations, messaging, and other types of communications. Each reverse link signal received by a particular base station is processed within a particular BS 270. The obtained data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions including coordination of soft handoff procedures between BSs 270. The BSCs 275 also route the received data to the MSC280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN290 interfaces with the MSC280, the MSC interfaces with the BSCs 275, and the BSCs 275 accordingly control the BS270 to transmit forward link signals to the mobile terminal 100.
The mobile communication unit 112 of the communication unit 110 in the mobile terminal accesses the mobile communication network based on the necessary data of the mobile communication network (such as a 2G/3G/4G network) built into the mobile terminal, including the user identification information and authentication information, in order to transmit mobile communication data (both uplink and downlink) for services such as web browsing and network multimedia playback for the mobile terminal user.
The wireless internet unit 113 of the communication unit 110 acts as a wireless hotspot by running the related hotspot protocol functions. The hotspot supports access by multiple other mobile terminals and transmits their mobile communication data (both uplink and downlink) for web browsing, network multimedia playback, and similar services by multiplexing the mobile communication connection between the mobile communication unit 112 and the mobile communication network. Because the terminal essentially multiplexes its own connection to the communication network for this traffic, the consumed mobile communication data is charged to the terminal's communication tariff by the charging entity on the communication network side, consuming data traffic included in the tariff contracted for the terminal.
A method for displaying marker data according to an embodiment of the present invention, as shown in fig. 3, includes: first data identifying the position in three-dimensional space of at least one sampling point in the acquired area is collected, and a three-dimensional virtual space is established according to the first data (101). For example, the first data may be point cloud data; a three-dimensional virtual space can be constructed from the point cloud data because the corresponding contour of the virtual-space scene can be obtained from it. Second data is acquired that represents media information obtained by presenting the target object in a two-dimensional space in the acquired area (102). For example, the second data may be street view image data, which is not limited to an image or a video image. The second data is projected directly into the three-dimensional virtual space for display according to the extracted projection strategy, giving a display result containing third data (103). The marker attribute information of the target object is distinguished with corresponding color identifiers according to the third data (104); the third data may be the color information that distinguishes the marker attributes. With the embodiment of the invention, point cloud data (containing only position coordinates) and street view images (pictures or videos in the four directions front, back, left, and right) serve as the basic display data, and the two-dimensional street view image at a specific position can be mapped on demand into the specified three-dimensional virtual space, realizing real-time virtual-real fusion: color information is attached to the point cloud data of the corresponding area, and the position and attribute information of markers such as signal lamps (straight ahead, left turn) and signboards (speed limit) can be identified directly from that color information. The data used is still the original point cloud data (carrying no color information), so the amount of data to be processed does not increase and processing efficiency is not affected. Projecting the two-dimensional street view image directly into the three-dimensional virtual scene for display yields a multi-dimensional display result: it contains the position information of the target object (identified through the point cloud map) and additionally attaches color information (color identifiers that distinguish marker attributes) to the point cloud data that needs coloring on the point cloud map. Subsequent editors therefore no longer need to complete marker attribute identification on the point cloud map by manual editing; they can use the multi-dimensional display result directly. That is, when constructing road markers in a high-precision point cloud map, the coordinates of a marker can be located accurately on the one hand, and the information the marker carries can be identified conveniently on the other, so marker identification work can be completed quickly and efficiently.
In the embodiment of the present invention, the processing logic formed by steps 101-104 may reside in a PC terminal. In a specific implementation, the processing logic runs on the PC terminal, where editing produces a map carrying color identifiers that can then be provided to the user side. During editing, the point cloud data identifies the position in three-dimensional space of at least one sampling point in the acquired area, so the corresponding contour of the virtual-space scene can be obtained from the point cloud data and the three-dimensional virtual space constructed from it. When combining the point cloud data with the street view image data, the street view image data is projected directly, according to the extracted projection strategy, into the three-dimensional virtual space constructed from the point cloud data for display, giving an editing result that contains what is required. In that editing result, the marker attribute information of the target object is distinguished with corresponding color identifiers, the original map data is optimized, and a user of a vehicle-mounted system can receive navigation assistance through the optimized, color-identified map data.
The processing logic formed by the steps 101-104 may also be located at the vehicle-mounted terminal or the mobile phone terminal, that is, the vehicle-mounted terminal or the mobile phone terminal executes the processing logic, which is not described in detail.
Of course, the vehicle-mounted terminal or mobile phone terminal may also display a result produced by the PC terminal executing the processing logic: data is first collected by the vehicle-mounted or mobile phone terminal and provided to the PC terminal for editing, and the editing result is then displayed either to an editing administrator at the PC terminal, or to the user side at a vehicle-mounted terminal (for example, one with a navigation function) or a mobile phone terminal (for example, a user navigating with a mobile phone application).
In an embodiment of the present invention, landmarks such as road signs, guideboards, signal lamps, etc. having obvious geometric features may be initially located in point cloud data in a feature matching manner.
In one embodiment of the present invention, the traffic markings can be distinguished by the difference in point cloud reflectivity from the surrounding ground.
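Purely as illustration, such a reflectivity-based separation can be as simple as a threshold on normalized return intensity (the threshold is an assumed, sensor-dependent value):

```python
import numpy as np

def extract_markings(points, intensity, thresh=0.6):
    """Keep points whose normalised laser return intensity exceeds a threshold;
    painted traffic markings reflect more strongly than the surrounding ground."""
    norm = (intensity - intensity.min()) / (np.ptp(intensity) or 1.0)
    return points[norm > thresh]
```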
In one practical application, a point cloud map formed from the point cloud data and a street view image serve as the reference data for projection. The point cloud map is generated from point cloud data acquired by a laser scanner mounted on a mobile measuring vehicle and can restore the detailed contours of a complex scene well. The street view image may be in image or video format, and the two-dimensional street view image at a specific position can be mapped directly into the specified three-dimensional space as required; that is, the two-dimensional street view image is projected into the scene of the three-dimensional virtual space calibrated using the point cloud map. For example, an image is projected into the three-dimensional virtual scene so that the real-scene image is displayed inside the virtual scene, achieving real-time virtual-real fusion. Specifically, the projection strategy may be projective texture mapping, which simulates a projector casting an image, i.e., projects an image into the virtual three-dimensional space; its basic principle is the same as shadow mapping (simulating real-world shadow effects in a virtual environment by building a depth map of the whole scene from the light-source position).
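Continuing the shadow-mapping analogy, the virtual projector needs intrinsic parameters; a standard OpenGL-style perspective matrix is a reasonable illustrative stand-in (field of view and clip planes are assumptions):

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """Projection matrix for the virtual projector; with the view matrix it
    maps world points into the projector's clip space, exactly as a light's
    matrices do when building a shadow map."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m
```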
In actual application, mapping the real-scene image into the virtual scene by projective texture mapping attaches color information to the original point cloud data, giving it much better recognizability and facilitating later editing and verification by editors. With the multi-dimensional display result obtained by the embodiment of the invention, i.e., point cloud data of the acquisition area with color information attached, an editor performing later editing and verification can directly mark the position of a marker and, based on the color information, also directly identify its attribute information, such as a signal lamp (straight ahead, left turn) or a guideboard (speed limit). Because road-marker attributes (such as a guideboard's speed and height limits, or the type of a traffic light) can be recognized from the color information, they can be identified quickly, providing a reference basis for accurate vehicle positioning and road-condition recognition, so that an autonomous vehicle can judge its surroundings accurately and adopt an appropriate driving strategy.
As shown in fig. 4, a method for displaying marker data according to an embodiment of the present invention includes: first data is collected that identifies the position in three-dimensional space of at least one sampling point in the acquired area, and a data re-blocking preprocessing operation is performed on the first data (201), so that a three-dimensional virtual space is established from the processed data (202). For example, in the preprocessing, the entire acquisition region where the first data is collected is denoted R1. The acquisition region is divided according to the designated region-division parameters to obtain at least two first target regions, denoted r11, ..., r1n, where n is a positive integer greater than 1; all first target regions are the same size. The boundaries of the at least two first target regions are expanded according to the region-boundary enhancement parameters to obtain at least two second target regions, denoted r21, ..., r2n; all second target regions are likewise the same size, and each second target region is larger than each first target region. Normal estimation for a current sampling point takes the neighboring points around it as reference, and for a sampling point at an edge position the neighbors on one or more sides are usually missing. In the embodiment of the present invention, the original boundary of the first target region is therefore expanded: after the edge is enlarged, the missing neighboring points are compensated before computation, yielding the second target region, and computing within the second target region makes the normal estimation of edge sampling points more accurate. Afterwards, the supplemented redundant information must be removed. Denoising and normal estimation are performed on each sampling point in the at least two second target regions with k neighborhood points as reference to obtain normal information, and the normal information is added to the first data. When the first data is point cloud data, preprocessing the initial point cloud data thus yields point cloud data containing normal information. Second data is then acquired that represents the media information obtained by presenting a target object in a two-dimensional space in the acquired region (203). For example, the second data may be street view image data, which is not limited to an image or a video image. The second data is projected directly into the three-dimensional virtual space for display according to the extracted projection strategy to obtain a display result containing third data (204). The marker attribute information of the target object is distinguished with corresponding color identifiers according to the third data (205); the third data may be the color information that distinguishes the marker attributes.
With the embodiment of the invention, point cloud data (containing only position coordinates) and street view images (pictures or videos in the four directions front, back, left, and right) serve as the basic display data. In practice the original point cloud data is collected along a driving route and divided by time periods, so the resulting blocks differ in size, are unevenly distributed, and overlap severely, which hampers subsequent normal estimation and data display. The original point cloud data therefore needs preprocessing: it is re-divided in the blocking stage, for example into final blocks of 256 × 256 square meters, but for the accuracy of subsequent point cloud denoising and normal estimation each divided area is expanded to 261 × 261 square meters, which improves the computational accuracy for edge data. Finally, the two-dimensional street view image at a specific position is mapped as required into the specified three-dimensional virtual space, realizing real-time virtual-real fusion: color information is attached to the point cloud data of the corresponding area, and the position and attribute information of markers such as signal lamps (straight ahead, left turn) and signboards (speed limit) can be identified directly from that color information. The data used is still the original point cloud data (carrying no color information), so the amount of data to be processed does not increase and processing efficiency is not affected. Projecting the two-dimensional street view image directly into the three-dimensional virtual scene for display yields a multi-dimensional display result: it contains the position information of the target object (identified through the point cloud map) and additionally attaches color information (color identifiers that distinguish marker attributes) to the point cloud data that needs coloring on the point cloud map, so subsequent editors no longer need to complete marker attribute identification by manual editing and can use the multi-dimensional display result directly. That is, when constructing road markers in a high-precision point cloud map, the coordinates of a marker can be located accurately on the one hand, and the information the marker carries can be identified conveniently on the other, so marker identification work can be completed quickly and efficiently.
In the display method of the sign data, first data is collected, the first data being used to identify the position of at least one sampling point of the collected area in three-dimensional space, and a data re-blocking preprocessing operation is performed on the first data so that a three-dimensional virtual space is established according to the processed data. For example, in the preprocessing, the entire acquisition region in which the first data is located is denoted as R1. The acquisition region is divided according to designated region division parameters to obtain at least two first target regions, identified as r_11, ..., r_1n, where n is a positive integer greater than 1 and each first target region has the same size. The boundaries of the at least two first target regions are then expanded according to region boundary enhancement parameters to obtain at least two second target regions, identified as r_21, ..., r_2n, where n is again a positive integer greater than 1; each second target region has the same size, and each second target region is larger than the corresponding first target region. Because the normal of a current sampling point is estimated with reference to the neighboring points around it, a sampling point at an edge position usually lacks neighbors on one or more sides. The embodiment of the invention therefore expands the original boundary of the first target region so that, after the edge is enlarged, the missing neighboring points can be filled in before the calculation; the second target region thus obtained is used for the calculation, which makes the normal estimation of edge sampling points more accurate.
In the data re-blocking preprocessing operation on the first data, after the edges have been expanded to fill in the neighborhood data required for accurate normal estimation, the filled-in redundant information must be removed, so that once the redundant area is deleted the subsequent operations return to the original actual area. Specifically, the at least two second target regions are obtained after the edges are expanded; each sampling point in them is denoised and its normal estimated with reference to its k neighborhood points, and the resulting normal information is added to the first data. After denoising and normal estimation, when each of the at least two second target areas is processed, the size parameter of the first target area is obtained and each second target area is cropped according to it, deleting the redundant area beyond the size range of the first target area; the purpose is to retain only the first target area. In practical applications, removing the redundant area means directly deleting the part of the second target area that exceeds the first target area. The actual area edge was enlarged only for the accuracy of edge normal estimation, so that the missing neighboring points could be filled in before the normal was estimated. These filled-in data are, however, redundant information, and because similar calculations are performed for many divided areas they would also cause repeated computation; for both reasons they must be removed once normal estimation is complete. This completes the preprocessing of the point cloud data.
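A minimal sketch of this re-blocking, assuming the 256 m block size and 261 m expanded size used as examples in this document, might look as follows; the function and parameter names are illustrative, and estimate_normals stands for the k-neighborhood PCA estimation sketched later in the preprocessing section.

```python
import numpy as np

BLOCK = 256.0                      # first target region size (m), example value
EXPANDED = 261.0                   # second target region size (m), example value
MARGIN = (EXPANDED - BLOCK) / 2.0  # extra border used only for normal estimation

def reblock(points):
    """points: (N, 3) array; returns {block key: cropped points with normals}."""
    keys = np.floor(points[:, :2] / BLOCK).astype(int)
    blocks = {}
    for key in {tuple(k) for k in keys}:
        lo = np.array(key) * BLOCK - MARGIN            # expanded lower corner
        hi = lo + EXPANDED                             # expanded upper corner
        sel = np.all((points[:, :2] >= lo) & (points[:, :2] < hi), axis=1)
        patch = points[sel]                            # second target region
        normals = estimate_normals(patch)              # k-neighborhood PCA (see below)
        # crop back to the first target region: the filled-in margin is redundant
        core = np.all((patch[:, :2] >= lo + MARGIN) &
                      (patch[:, :2] < hi - MARGIN), axis=1)
        blocks[key] = np.hstack([patch[core], normals[core]])
    return blocks
```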
By adopting the embodiment of the invention, when the first data is point cloud data, preprocessing the initial point cloud data yields point cloud data containing normal information. Second data is acquired, characterizing media information obtained by presenting a target object of the acquired area in two-dimensional space. For example, the second data may be street view image data, which may be a still image or a video image. According to the extracted projection strategy, the second data is projected directly into the three-dimensional virtual space for display, giving a display result containing third data, and the sign attribute information of the target object is distinguished by corresponding color identifiers according to the third data. The third data may be color information that distinguishes marker attributes. In this embodiment, point cloud data (containing only position coordinates) and street view images (pictures or videos in four directions: front, back, left and right) are used as the basic display data. In practical applications, the original point cloud data is collected along a driving route and divided by time period, so the raw blocks differ in size, are unevenly distributed, and overlap severely, which hinders subsequent normal estimation and data display. The original point cloud data therefore needs to be preprocessed, that is, re-divided in the dividing stage; for example, each final block is 256 × 256 square meters, but for the accuracy of subsequent point cloud denoising and normal estimation, each divided area is enlarged to 261 × 261 square meters, which improves the calculation accuracy for edge data. Because the actual area edge is enlarged for the sake of edge normal estimation accuracy, the missing neighboring points are filled in before the normal is estimated; the filled-in data is redundant information and must be removed once normal estimation is finished, that is, the part of the second target area beyond the first target area is deleted directly after normal estimation. Finally, the two-dimensional street view image at a specific position is mapped into the designated three-dimensional virtual space as required, realizing real-time fusion of the virtual and the real: color information is attached to the point cloud data of the corresponding area, and the position and attribute information of markers such as signal lights (go straight, turn left) and signboards (speed limit) can be identified directly from that color information. The data used is still the original point cloud data (carrying no color information), so the amount of data to be processed is not increased and processing efficiency is not affected.
Projecting the two-dimensional street view image directly into the three-dimensional virtual scene for display yields a multi-dimensional display result: it contains both the position information of the target object (identified through the point cloud map) and color information attached to the point cloud data to be colored on the point cloud map (color identifiers that distinguish marker attributes). Subsequent editors therefore no longer need to complete marker attribute identification on the point cloud map by manual editing; they can use the multi-dimensional display result directly. When constructing road markers in a high-precision point cloud map, the coordinates of a marker can be located accurately and, at the same time, the information the marker carries can be identified conveniently, so that marker identification can be completed quickly and efficiently.
In the embodiment of the invention, the point cloud data can be densified (this densification is rendered as "encryption" elsewhere in this text). Specifically, point cloud data containing the normal information is acquired; this may be called the basic data. After the point cloud data is densified according to the normal information, additional data containing supplementary information is obtained; this may be called the supplementary data. When point cloud data directly faces the camera, the projection gaps caused by the discreteness of the point cloud itself are largest, and in practical use these gaps degrade the subsequent projection texture mapping. The purpose of densifying the point cloud is therefore to fill these gaps. In practical applications, to increase calculation speed, the normal information of all point cloud data to be drawn is passed to the graphics processor, where it is converted from the model coordinate system to the camera coordinate system; if the converted normal points approximately toward the camera, the current point cloud data is judged to need densification. Densification is implemented by scattering points uniformly within a specified range around the position of the current point cloud data; these uniformly scattered points are the supplementary information mentioned in this embodiment, also called the supplementary data, as distinct from the basic point cloud data.
In the embodiment of the invention, a depth test can be performed on the point cloud data. Specifically, point cloud data containing the normal information is acquired; this may be called the basic data. After the point cloud data is densified according to the normal information, additional data containing supplementary information is obtained; this may be called the supplementary data. The data set formed by the basic data and the supplementary data is taken as the data to be drawn, and during drawing a depth test is performed on it according to projection parameters (such as the projector parameters): the data is drawn into an image in which each pixel point records the depth value of the current mapping area. The depth test of this embodiment generates a depth map that provides reference data for the projection texture mapping and reduces the possibility of mis-mapping. The input of this process is the basic point cloud data and the densified additional point cloud data, and the processing is as follows: the input data is drawn, according to the projector parameters, at the position of the projector used for projection texture mapping, producing an image with the same aspect ratio as the image used for projection mapping, in which each pixel point records the depth value of the current mapping area.
In the embodiment of the invention, projection texture mapping can be performed on the point cloud data. Specifically, point cloud data containing the normal information is acquired (the basic data), and after densification according to the normal information, additional data containing supplementary information is obtained (the supplementary data). The data set formed by the basic data and the supplementary data is taken as the data to be drawn and mapped according to a projection texture mapping strategy, so that the two-dimensional street view image data is projected directly into the three-dimensional virtual space for display. In the mapping process, it is first judged whether a coordinate system conversion is currently required (including conversion from the model coordinate system to the camera coordinate system and from the model coordinate system to the projector coordinate system). When conversion is required, it is performed first; after it succeeds, the data to be drawn is drawn and the texture coordinates corresponding to each coloring point are determined in real time. For example, the texture coordinates of the current point cloud data are obtained by calculating its coordinate positions in the street view image and in the depth map. The texture coordinates of each coloring point are then examined to determine whether the points requiring coloring lie within the projection area. If they do, the color values of those points are determined from the comparison of their first depth values in the projector coordinate system with the second depth values in the depth map, and color is added to them at the texture coordinates using those color values, yielding color information that distinguishes marker attributes. Coloring the points in the data to be drawn that need coloring produces color identifiers that distinguish the sign attribute information of the target object in the real two-dimensional environment, so road sign attributes become easy to tell apart; for example, the attribute information of signal lights (go straight, turn left) and guideboards (speed limit) can be identified through the color identifiers. Here the depth map refers to the image obtained when the depth test is performed, according to projection parameters (such as the projector parameters), during drawing of the data to be drawn, each of whose pixel points records the depth value of the current mapping area.
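The two conversions named above reduce, for the projector path, to one matrix product plus a perspective divide. The following sketch, under the assumption of a combined column-vector 4×4 view-projection matrix and names chosen here for illustration only, shows how texture coordinates in the street view image and the depth map can be derived:

```python
import numpy as np

def texture_coords(points, projector_view_proj):
    """Model-space points -> texture coordinates and depth in projector space.

    points: (N, 3) array; projector_view_proj: combined 4x4 view-projection
    matrix of the projector (an assumed representation, not from the patent).
    """
    hom = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    clip = hom @ projector_view_proj.T                    # model -> projector clip space
    ndc = clip[:, :3] / clip[:, 3:4]                      # perspective divide -> NDC
    tex = (ndc[:, :2] + 1.0) / 2.0                        # NDC [-1, 1] -> texture [0, 1]
    return tex, ndc[:, 2], clip[:, 3]                     # coords, depth, w (w > 0: in front)
```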
By adopting the embodiment of the invention, the convenience of displaying road marker information and the complexity of constructing color point cloud data are fully considered: the two-dimensional street view image is mapped directly into the three-dimensional virtual space through the projection texture mapping technique, attaching color information to the point cloud data without increasing its volume and thereby facilitating manual identification. Because the projected image can be adjusted in real time, the cumbersome process of calibrating the camera in advance is avoided, and several images can be mapped into the same region for a better identification effect. Moreover, because the point cloud data carries correct depth information, the image does not stretch when the observation angle is switched after the texture image is mapped onto the point cloud, which helps manual calibration of marker information.
As shown in fig. 5, a terminal 41 according to an embodiment of the present invention includes: an acquisition unit 411, configured to acquire first data, where the first data is used to identify the position of at least one sampling point of the acquired area in three-dimensional space; a space modeling unit 412, configured to establish a three-dimensional virtual space according to the first data; an obtaining unit 413, configured to obtain second data, where the second data is used to characterize media information obtained by presenting a target object of the collected region in two-dimensional space; a projection unit 414, configured to project the second data directly into the three-dimensional virtual space for display according to a projection strategy, obtaining a display result containing third data; and an identifying unit 415, configured to distinguish the sign attribute information of the target object by corresponding color identifiers according to the third data.
In the embodiment of the invention, first data is acquired, the first data being used to identify the position of at least one sampling point of the acquired area in three-dimensional space, and a three-dimensional virtual space is established according to the first data. For example, the first data may be point cloud data; a three-dimensional virtual space can be constructed from point cloud data because the corresponding contour of the virtual space scene can be obtained from it. Second data is acquired, characterizing media information obtained by presenting a target object of the acquired area in two-dimensional space. For example, the second data may be street view image data, which may be a still image or a video image. According to the extracted projection strategy, the second data is projected directly into the three-dimensional virtual space for display, giving a display result containing third data, and the sign attribute information of the target object is distinguished by corresponding color identifiers according to the third data. The third data may be color information that distinguishes marker attributes. By adopting this embodiment, point cloud data (containing only position coordinates) and street view images (pictures or videos in four directions: front, back, left and right) are used as the basic display data, and the two-dimensional street view image at a specific position can be mapped into the designated three-dimensional virtual space as required, realizing real-time fusion of the virtual and the real: color information is attached to the point cloud data of the corresponding area, and the position and attribute information of markers such as signal lights (go straight, turn left) and signboards (speed limit) can be identified directly from that color information. The data used is still the original point cloud data (carrying no color information), so the amount of data to be processed is not increased and processing efficiency is not affected. Projecting the two-dimensional street view image directly into the three-dimensional virtual scene for display yields a multi-dimensional display result that contains both the position information of the target object (identified through the point cloud map) and color information attached to the point cloud data to be colored on the point cloud map (color identifiers that distinguish marker attributes). Subsequent editors therefore no longer need to complete marker attribute identification on the point cloud map by manual editing; they can use the multi-dimensional display result directly: when constructing road markers in a high-precision point cloud map, the coordinates of a marker can be located accurately and the information the marker carries can be identified conveniently, so that marker identification can be completed quickly and efficiently.
In an implementation of the embodiment of the present invention, the terminal further includes a preprocessing unit for performing the data re-blocking preprocessing operation on the first data. Specifically, the preprocessing unit is configured to: obtain the acquisition area in which the first data is located; divide the acquisition area according to designated area division parameters to obtain at least two first target areas; expand the boundaries of the at least two first target areas according to area boundary enhancement parameters to obtain at least two second target areas; and denoise each sampling point in the at least two second target areas and estimate its normal with reference to its k neighborhood points, adding the resulting normal information to the first data.
In an implementation of the embodiment of the present invention, the preprocessing unit is further configured to: after denoising and normal estimation, when processing each of the at least two second target areas, obtain the size parameter of the first target area and crop each second target area according to it, so as to delete the redundant area beyond the size range of the first target area.
In an implementation of the embodiment of the present invention, the terminal further includes a densification unit configured to: acquire first data containing the normal information, and perform data densification processing on the first data according to the normal information to obtain fourth data containing supplementary information.
In an implementation of the embodiment of the present invention, the terminal further includes a depth test unit configured to: acquire first data containing the normal information; acquire fourth data containing the supplementary information; take the data set formed by the first data and the fourth data as the data to be drawn; and perform a depth test on the data to be drawn according to projection parameters during drawing, so that each pixel point in the drawn image records the depth value of the current mapping area.
In an embodiment of the present invention, the projection unit is further configured to: acquire first data containing the normal information; acquire fourth data containing the supplementary information; take the data set formed by the first data and the fourth data as the data to be drawn; extract a projection texture mapping strategy and, when mapping the data to be drawn according to it, judge whether a coordinate system conversion is currently required; if so, perform the conversion first, then draw the data to be drawn after the conversion succeeds, determining in real time the texture coordinates corresponding to each coloring point; and add color to the points requiring coloring in the data to be drawn according to the texture coordinates to obtain the third data.
In an embodiment of the present invention, the projection unit is further configured to: judge, from the texture coordinates corresponding to each coloring point, whether the points requiring coloring in the data to be drawn lie within the projection area; if so, determine the color values of those points from the comparison of their first depth values in the projector coordinate system with the second depth values in the depth map, and perform the color adding processing on them according to those color values. The depth map is the image obtained when the depth test is performed, according to the projection parameters, during drawing of the data to be drawn.
As for the processor used for data processing, the processing it performs may be implemented by a microprocessor, a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA); as for the storage medium, it contains operation instructions, which may be computer-executable code, and the operation instructions implement the steps in the method flow of the above-described embodiments of the present invention.
It should be noted here that the above descriptions relating to the terminal are similar to the method description above, and the description of the beneficial effects, being the same as for the method, is omitted for brevity. For technical details not disclosed in the terminal embodiment of the present invention, please refer to the description of the method flow embodiments of the present invention.
An embodiment of the invention is explained below by taking a practical application scenario as an example:
In a road marker identification scenario, preliminary identification can be completed by combining geometric feature extraction with pattern matching. However, because such algorithms are not robust in themselves and manual editing and checking are needed to guarantee data accuracy, a data presentation scheme convenient for manual processing is required. In the prior art, one scheme uses a color point cloud directly as the reference data: color information is attached to the original point cloud, so that an editor can both locate a marker accurately through the point cloud and directly identify the marker's attribute information (such as a guideboard's speed limit or height limit, or a traffic light's type). The effect of a colored point cloud is shown in fig. 6, one example being the trees identified in area A11. Another scheme switches between two-dimensional and three-dimensional scenes: the street view data is matched with the point cloud data by coordinates, and after coordinate positioning the view switches from the three-dimensional scene to the two-dimensional scene to confirm the information of the corresponding marker; alternatively, camera calibration is used to identify the marker in the two-dimensional image and then map it into three-dimensional space, completing the identification. The mapping effect is shown by the areas A12-A15 in fig. 7, where the point cloud data has been matched with the street view data. Yet another scheme is the commonly used camera calibration method. Although all three schemes can achieve a marking effect, each has problems. 1) Although the color point cloud solves the display problem, so that marker attribute information can be read directly from the point cloud, achieving this requires every point to carry color information (rgb) in addition to its coordinate information (xyz), which significantly increases the point cloud data volume; moreover, producing a color point cloud of reasonably good quality requires registering the image with the point cloud for every scene, which costs considerable manpower and time. 2) The two-dimensional/three-dimensional switching method is relatively cumbersome when editing data, and the information (position and attributes) of the same marker cannot be shown in one window, which hinders editing. 3) The camera calibration method, like the color point cloud scheme, also requires per-scene registration, and when mapping from two-dimensional space to three-dimensional space it is susceptible to point cloud noise and sampling precision, resulting in mapping position errors.
Road markers such as road signs, guideboards, signal lights and traffic markings are indispensable parts of a high-precision map, providing the reference basis for accurate vehicle positioning and road condition identification. Because the original point cloud in the map carries no color information and suffers interference during collection, identifying the road data still has to be completed by manual editing; the prior-art schemes each have their own defects and cannot solve this problem, the identification remains unclear, and manual identification during later editing costs a great deal of manpower and time. For this scenario, the embodiment of the invention provides a road sign display scheme based on virtual-real fusion: the point cloud map and the street view image are used as the reference data, and the street view image and the point cloud data are fused in real time by projection texture mapping. For example, the areas A16-A18 in fig. 8 show the fusion effect of the point cloud map and the street view image: the original point cloud is given color information, making it convenient to complete the attribute identification of road signs manually. Taking full account of the disadvantage of displaying road marker information across multiple windows, the difficulty of constructing a color point cloud, and the problems caused by its huge data volume, the projection texture technique maps the color information of the panoramic image directly into the three-dimensional scene, displaying marker attribute information in three-dimensional space without increasing the point cloud data volume, which makes the editors' identification work convenient.
The display effect obtained with this virtual-real fusion road sign display scheme can be provided, through a data display platform, to the editors who edit and check the data at a later stage. When an editor uses the platform to construct road signs in a high-precision map, the coordinates of a sign can be located accurately and, at the same time, the information it carries can be identified conveniently, so that sign identification can be completed quickly and efficiently. The display platform uses point cloud data (containing only position coordinates) and street view images (pictures or videos in four directions: front, back, left and right) as the basic display data; editors can map the street view image at a specific position into the designated space as required, so that color information is attached to the point cloud of the corresponding area, and the positions and attribute information of markers such as signal lights (go straight, turn left) and signboards (speed limit) can be identified directly from that color information.
This application scenario adopts the projection texture technology of the embodiment of the invention to map the street view image directly into the three-dimensional scene. In the process of displaying the real image in the virtual scene by virtual-real fusion, the point cloud data is preprocessed first, and the road sign objects are displayed second.
First, point cloud data preprocessing
Because the point cloud data is acquired by a vehicle-mounted laser, it is easily affected by the external environment and by the acquiring vehicle itself, so the point cloud is unevenly distributed; and because street lamps, guideboards and the like are relatively small, data is easily missing from the final point cloud (simply not acquired), which harms the projection texture effect (a missing area has no object to receive the projection). Data preprocessing is therefore needed, and it also provides support for the subsequent point cloud densification. The main task of point cloud data preprocessing is to re-block the original data while estimating the normal of each point in the cloud.
The point cloud data preprocessing process is shown in fig. 9 and includes:
Step 401, obtaining the original point cloud data.
Step 402, re-segmenting the point cloud data.
Step 403, noise elimination and normal estimation; the segmentation is completed and point cloud data with normal information is obtained.
The original point cloud data is collected along a driving route and divided by time period, so the raw data blocks differ in size, are unevenly distributed, and overlap severely, which hinders subsequent normal estimation and data display. During preprocessing, the point cloud segmentation stage re-blocks the original point cloud data. Fig. 10 is a schematic diagram of point cloud data block segmentation: after re-blocking, each final block is 256 × 256 square meters, one example being the area border identified by A21 in fig. 10. However, for the accuracy of subsequent point cloud denoising and normal estimation, the divided area is enlarged to 261 × 261 square meters, one example being the area border identified by A22 in fig. 10. As can be seen in fig. 10, the border of area A22 is larger than that of area A21.
In the preprocessing, point cloud denoising and normal estimation use the k-neighborhood method. For denoising, the average distance between each point and its k nearest neighboring points is calculated; when this average exceeds a specified threshold, the point is removed. For normal estimation, the neighborhood point set of each point in the cloud is found in turn, and principal component analysis (PCA) is performed on it to obtain the point's normal direction.
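A compact sketch of both steps, assuming numpy and scipy are available (the k value and the threshold are example figures, not values from the patent):

```python
import numpy as np
from scipy.spatial import cKDTree

def denoise(points, k=16, thresh=0.5):
    """Drop points whose mean distance to their k nearest neighbors is large."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)        # first neighbor is the point itself
    mean_d = dists[:, 1:].mean(axis=1)            # mean distance to k nearest neighbors
    return points[mean_d <= thresh]

def estimate_normals(points, k=16):
    """PCA normal estimation over each point's k-neighborhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbhd = points[nbrs] - points[nbrs].mean(axis=0)   # centered neighborhood
        # the direction of least variance (smallest singular value) is the normal
        _, _, vt = np.linalg.svd(nbhd, full_matrices=False)
        normals[i] = vt[-1]
    return normals
```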
After the subsequent calculations are completed, the redundant area is removed: the area inside the border identified by A21 in fig. 10 is retained, and the part of the A22 region that exceeds the A21 region is deleted directly. This completes the point cloud data preprocessing.
Second, road sign display
Unlike the data preprocessing, the road sign display is realized by a real-time virtual-real fusion technique: real images are mapped into the virtual scene through projection textures, so that the point cloud data becomes easier to identify. However, projecting point cloud data directly raises two problems to be solved: 1. the point cloud data is discrete, so when the camera is drawn to certain positions the point cloud looks relatively sparse, which degrades the projection effect; 2. limited by the projection technique, a direct projection treats the image as transparent, i.e., it is not occluded by objects in front. To address these two issues, the road sign display shown in fig. 11 comprises: a point cloud densification process, a depth test process, and a projection texture mapping process.
Fig. 12 is a complete flow chart of a marker display combining the above three processes, including:
Step 501, the point cloud data to be drawn is obtained and used as the basic data.
Step 502, densify according to the normals to obtain the additional data.
Step 503, generate a depth map under the projector's view angle from the basic data and the additional data.
Step 504, perform projection texture mapping using the basic data and the additional data.
2.1, the point cloud densification process shown in fig. 11 is described as follows:
The point cloud densification is performed directly at drawing time; no data preprocessing is done in advance. Considering the characteristics of point cloud data during actual projection, in the point cloud projection diagram shown in fig. 13 the position identified by A31 is the point cloud data and the position identified by A32 is the projection line. As can be seen from fig. 13, when the point cloud data directly faces the camera, the projection gaps caused by the discreteness of the point cloud itself are largest, and in actual use these gaps also degrade the subsequent projection texture mapping. The purpose of densifying the point cloud is therefore to fill these gaps. In the data preprocessing stage, each point has already had its normal estimated, denoted Normal_pointcloud. To increase calculation speed, the normals of all point cloud data to be drawn are passed to the graphics processing unit (GPU), where a vertex shader converts the normal data from the model coordinate system to the camera coordinate system, yielding Normal_pointcloud-camera. If Normal_pointcloud-camera points approximately at the camera, the point cloud needs to be densified. Here Normal_pointcloud denotes the estimated point cloud normal information (in the model coordinate system), and Normal_pointcloud-camera denotes the point cloud normal information converted into the camera coordinate system.
Densification is implemented by scattering points uniformly in the plane perpendicular to Normal_pointcloud, within a specified range around the current point cloud position. The whole process is also completed in the vertex shader, and the generated data is then retrieved from the GPU through the transform feedback technique.
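For illustration, the facing test and the uniform scattering can be sketched on the CPU as below; the patent performs both in a vertex shader and reads the result back through transform feedback, and the radius, count, and facing threshold here are assumed values, with unit-length normals assumed.

```python
import numpy as np

def densify(points, normals, view_matrix, radius=0.1, n_new=8, facing=0.7):
    """CPU sketch of the densification; all thresholds are illustrative."""
    rot = view_matrix[:3, :3]                  # rotation part of model -> camera
    extra = []
    for p, n in zip(points, normals):
        n_cam = rot @ n                        # Normal_pointcloud-camera
        # "approximately pointing at the camera": dominant z component in
        # camera space (the sign depends on the handedness convention, hence abs)
        if abs(n_cam[2]) < facing:
            continue
        # two axes spanning the plane perpendicular to the normal
        a = np.cross(n, [1.0, 0.0, 0.0])
        if np.linalg.norm(a) < 1e-6:           # normal parallel to x: pick another axis
            a = np.cross(n, [0.0, 1.0, 0.0])
        a /= np.linalg.norm(a)
        b = np.cross(n, a)
        # scatter points uniformly in that plane around the current position
        uv = np.random.uniform(-radius, radius, size=(n_new, 2))
        extra.extend(p + u * a + v * b for u, v in uv)
    return np.asarray(extra)                   # the supplementary data
```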
2.2, the depth test process shown in fig. 11 is described as follows:
The purpose of the depth test stage is to generate a depth map that provides reference data for the projection texture mapping, reducing the likelihood of mis-mapping. The input data of this stage is the basic point cloud data plus the densified additional point cloud data. The data is drawn, according to the projector parameters, from the position of the projector used for projection texture mapping, producing an image Depth_pointcloud-project with the same aspect ratio as the image used for projection mapping; each pixel point of this image records the depth value of the current mapping area. Here Depth_pointcloud-project denotes the point cloud depth map drawn under the projector's view angle.
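Conceptually, generating Depth_pointcloud-project amounts to rasterizing the points from the projector's viewpoint and keeping the nearest depth per pixel. Below is a CPU sketch under the same assumed matrix convention as the earlier texture_coords sketch; a real implementation would render with the GPU depth buffer instead of this loop.

```python
import numpy as np

def depth_map(points, proj_view, width, height):
    """Nearest point depth per pixel, seen from the projector (CPU sketch)."""
    depth = np.full((height, width), np.inf)
    hom = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coords
    clip = hom @ proj_view.T                               # to projector clip space
    ndc = clip[:, :3] / clip[:, 3:4]                       # perspective divide
    # keep only points inside the projector frustum
    ok = np.all(np.abs(ndc) <= 1.0, axis=1) & (clip[:, 3] > 0)
    px = ((ndc[ok, 0] + 1) / 2 * (width - 1)).astype(int)
    py = ((1 - (ndc[ok, 1] + 1) / 2) * (height - 1)).astype(int)
    for x, y, z in zip(px, py, ndc[ok, 2]):
        depth[y, x] = min(depth[y, x], z)                  # keep the nearest depth
    return depth
```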
2.3, the projection texture mapping shown in fig. 11 is described as follows:
After data densification and depth calculation are completed, projection texture mapping can be performed. Projection texture mapping is the process of determining the texture coordinates of each colored point in real time during drawing.
The process of projection texture mapping, as shown in fig. 14, includes:
step 601, point cloud data to be drawn are obtained.
Step 602, coordinate conversion: convert the coordinate data into the projector coordinate system and into the camera coordinate system.
Step 603, calculating the coordinates of the current point in the street view image and the depth map.
Step 604, judging whether the texture coordinate belongs to [0,1], if so, executing step 605; otherwise, step 608 is performed.
Step 605, calculate the current point depth value.
Step 606, judging whether the current depth is less than or equal to the depth in the depth map; if so, executing step 607; otherwise, step 608 is performed.
Step 607, the RGB values in the street view image are taken, and step 609 is executed.
Step 608, taking the point cloud's basic color value.
Step 609, drawing the point cloud data.
When projection texture mapping is performed, the vertex shader, in addition to the usual coordinate conversion of the point cloud position data (from the model coordinate system to the camera coordinate system), must also convert from the model coordinate system to the projector coordinate system, in order to calculate the position of the current point cloud in the street view image and in the depth map (i.e., the corresponding texture coordinates). Once the coordinate conversion step is complete, the point cloud data requiring coloring can be colored in the fragment shader. During coloring, the calculated texture coordinates are checked to determine whether they lie within the projection range; then the depth value of the point in the projector coordinate system is compared with the depth value in the depth map to determine whether it is occluded by other point cloud data; finally, the true color value of the current point cloud data is determined from the comparison result and applied, completing the whole projection texture mapping process. This embodiment takes full account of the convenience of displaying road marker information and the complexity of constructing color point cloud data: the two-dimensional image is mapped directly into three-dimensional space by projection texture mapping, attaching color information to the point cloud without increasing the point cloud data volume and thereby facilitating manual identification. The method can adjust the projected image in real time, avoiding the cumbersome process of calibrating a camera in advance, and several images can be mapped into the same area for a better identification effect. Moreover, because the point cloud has correct depth information, the image does not stretch when the observation angle is switched after the texture image is mapped onto the point cloud, which helps manual calibration of marker information.
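Putting the pieces together, the fragment-shader decision of steps 604-609 can be mimicked on the CPU as follows; dmap is a depth map such as the one sketched above, image is the street view picture at the projector's resolution, and the small bias is an assumed tolerance against self-occlusion artifacts (none of these names come from the patent).

```python
import numpy as np

def shade(points, base_colors, image, dmap, projector_view_proj, bias=1e-3):
    """CPU sketch of the per-point coloring decision (steps 601-609 above)."""
    h, w = dmap.shape
    hom = np.hstack([points, np.ones((len(points), 1))])
    clip = hom @ projector_view_proj.T                 # model -> projector clip space
    ndc = clip[:, :3] / clip[:, 3:4]
    tex = (ndc[:, :2] + 1.0) / 2.0                     # texture coordinates
    out = base_colors.copy()
    for i, (u, v) in enumerate(tex):
        # step 604: texture coordinates must lie in [0, 1], in front of projector
        if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0 and clip[i, 3] > 0):
            continue                                   # step 608: keep base color
        x = int(u * (w - 1)); y = int((1.0 - v) * (h - 1))
        # steps 605-606: occluded points keep their base color
        if ndc[i, 2] <= dmap[y, x] + bias:
            out[i] = image[y, x]                       # step 607: street view RGB
    return out                                         # step 609: draw with these colors
```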
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (16)

1. A method for displaying sign data, the method comprising:
acquiring first data, wherein the first data is used for identifying the position of at least one sampling point in the acquired area in a three-dimensional space;
establishing a three-dimensional virtual space according to the first data;
acquiring second data, wherein the second data is used for representing media information obtained by presenting a target object in a two-dimensional space in an acquired area;
projecting the second data into the three-dimensional virtual space for display according to the extracted projection strategy to obtain a display result containing third data; the third data is color information for distinguishing the attribute of the marker, and the display result comprises position information of the target object and first data attached with the color information;
and distinguishing the mark attribute information of the target object by using the corresponding color identifier according to the third data.
2. The method of claim 1, further comprising:
and after the first data are collected, carrying out data re-blocking pretreatment operation on the first data.
3. The method of claim 2, wherein performing a pre-processing operation of data re-blocking on the first data comprises:
acquiring the acquisition area where the first data is located;
dividing the acquisition region according to the designated region division parameters to obtain at least two first target regions;
expanding the boundaries of the at least two first target areas according to the area boundary enhancement parameters to obtain at least two second target areas;
and denoising and normal estimation are carried out on each sampling point in the at least two second target regions by taking k neighborhood points as references to obtain normal information, and the normal information is added into the first data.
4. The method of claim 3, wherein performing a pre-processing operation of data re-blocking on the first data further comprises:
after the denoising and the normal estimation are carried out, when each second target area of the at least two second target areas is processed, the size parameter of the first target area is obtained;
and cutting each second target area according to the size parameter of the first target area so as to delete the redundant area beyond the size range of the first target area.
5. The method according to claim 3 or 4, characterized in that the method further comprises:
acquiring first data containing the normal information;
and carrying out data densification processing on the first data according to the normal information to obtain fourth data containing supplementary information.
6. The method of claim 5, further comprising:
acquiring first data containing the normal information;
acquiring fourth data containing the supplementary information;
and taking a data set formed by the first data and the fourth data as data to be drawn, performing depth test on the data to be drawn according to projection parameters in the process of drawing, drawing to obtain each pixel point in the image, and recording the depth value of the current mapping area.
7. The method of claim 5, wherein projecting the second data into the three-dimensional virtual space for display according to the extracted projection strategy comprises:
acquiring first data containing the normal information;
acquiring fourth data containing the supplementary information;
taking a data set formed by the first data and the fourth data as data to be drawn, extracting a projection texture mapping strategy, and judging whether coordinate system conversion is needed at present when the data to be drawn is mapped according to the projection texture mapping strategy;
when the coordinate system conversion is needed, firstly, the coordinate system conversion is carried out, after the coordinate system conversion is successful, the data to be drawn are drawn, and the texture coordinate corresponding to each coloring point in the data to be drawn is determined in real time;
and performing color adding processing on the points needing to be colored in the data to be drawn according to the texture coordinates to obtain the third data.
8. The method according to claim 7, wherein performing the color adding processing on the points requiring coloring in the data to be drawn according to the texture coordinates comprises:
judging according to the texture coordinates corresponding to each coloring point to determine whether the points needing coloring in the data to be drawn are in a projection area, if so, determining the color values corresponding to the points needing coloring in the data to be drawn according to the comparison result obtained by comparing the first depth values of the points needing coloring in the data to be drawn in the projector coordinate system with the second depth values in the depth map, and performing the color adding processing on the points needing coloring in the data to be drawn according to the color values;
the depth map is an image obtained by drawing when depth test is carried out in the process of drawing the data to be drawn according to the projection parameters.
9. A terminal, characterized in that the terminal comprises:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring first data, and the first data is used for identifying the position of at least one sampling point in an acquired area in a three-dimensional space;
the space modeling unit is used for establishing a three-dimensional virtual space according to the first data;
the acquisition unit is used for acquiring second data, and the second data is used for representing media information obtained by presenting the target object in a two-dimensional space in the acquired area;
the projection unit is used for projecting the second data into the three-dimensional virtual space for display according to the extracted projection strategy to obtain a display result containing third data; the third data is color information for distinguishing the attribute of the marker, and the display result comprises position information of the target object and first data attached with the color information;
and the identification unit is used for distinguishing the mark attribute information of the target object by corresponding color identification according to the third data.
10. The terminal of claim 9, wherein the terminal further comprises:
and the preprocessing unit is used for carrying out data re-blocking preprocessing operation on the first data.
11. The terminal of claim 10, wherein the preprocessing unit is further configured to:
acquiring the acquisition area where the first data is located;
dividing the acquisition region according to the designated region division parameters to obtain at least two first target regions;
expanding the boundaries of the at least two first target areas according to the area boundary enhancement parameters to obtain at least two second target areas;
and denoising and normal estimation are carried out on each sampling point in the at least two second target regions by taking k neighborhood points as references to obtain normal information, and the normal information is added into the first data.
12. The terminal of claim 11, wherein the preprocessing unit is further configured to:
after the denoising and the normal estimation are carried out, when each second target area of the at least two second target areas is processed, the size parameter of the first target area is obtained;
and cutting each second target area according to the size parameter of the first target area so as to delete the redundant area beyond the size range of the first target area.
13. The terminal according to claim 11 or 12, characterized in that the terminal further comprises: a densification unit configured to:
acquiring first data containing the normal information;
and carrying out data densification processing on the first data according to the normal information to obtain fourth data containing supplementary information.
14. The terminal of claim 13, wherein the terminal further comprises: a depth test unit to:
acquiring first data containing the normal information;
acquiring fourth data containing the supplementary information;
and taking a data set formed by the first data and the fourth data as data to be drawn, performing depth test on the data to be drawn according to projection parameters in the process of drawing, drawing to obtain each pixel point in the image, and recording the depth value of the current mapping area.
15. The terminal of claim 13, wherein the projection unit is further configured to:
acquiring first data containing the normal information;
acquiring fourth data containing the supplementary information;
taking a data set formed by the first data and the fourth data as data to be drawn, and judging, when the data to be drawn is mapped according to a projection texture mapping strategy, whether coordinate system conversion is currently needed;
when the coordinate system conversion is needed, firstly, the coordinate system conversion is carried out, after the coordinate system conversion is successful, the data to be drawn are drawn, and the texture coordinate corresponding to each coloring point in the data to be drawn is determined in real time;
and performing color adding processing on the points needing to be colored in the data to be drawn according to the texture coordinates to obtain the third data.
16. The terminal of claim 15, wherein the projection unit is further configured to:
judging according to the texture coordinates corresponding to each coloring point to determine whether the points needing coloring in the data to be drawn are in a projection area, if so, determining the color values corresponding to the points needing coloring in the data to be drawn according to the comparison result obtained by comparing the first depth values of the points needing coloring in the data to be drawn in the projector coordinate system with the second depth values in the depth map, and performing the color adding processing on the points needing coloring in the data to be drawn according to the color values;
the depth map is an image obtained by drawing when depth test is carried out in the process of drawing the data to be drawn according to the projection parameters.
CN201710150737.5A 2017-03-14 2017-03-14 Display method of mark data and terminal Active CN108573522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710150737.5A CN108573522B (en) 2017-03-14 2017-03-14 Display method of mark data and terminal


Publications (2)

Publication Number Publication Date
CN108573522A CN108573522A (en) 2018-09-25
CN108573522B true CN108573522B (en) 2022-02-25

Family

ID=63578497


Country Status (1)

Country Link
CN (1) CN108573522B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179152B (en) * 2018-11-12 2023-04-28 阿里巴巴集团控股有限公司 Road identification recognition method and device, medium and terminal
CN109814137B (en) * 2019-02-26 2023-07-14 腾讯科技(深圳)有限公司 Positioning method, positioning device and computing equipment
CN111754564B (en) * 2019-03-28 2024-02-20 杭州海康威视系统技术有限公司 Video display method, device, equipment and storage medium
CN111935153B (en) * 2020-08-11 2022-04-26 北京天融信网络安全技术有限公司 CAN bus-based target message extraction method and device and storage medium
CN113852829A (en) * 2021-09-01 2021-12-28 腾讯科技(深圳)有限公司 Method and device for encapsulating and decapsulating point cloud media file and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6268862B1 (en) * 1996-03-08 2001-07-31 Canon Kabushiki Kaisha Three dimensional virtual space generation by fusing images
CN103093191A (en) * 2012-12-28 2013-05-08 中电科信息产业有限公司 Object recognition method with three-dimensional point cloud data and digital image data combined
US20130121564A1 (en) * 2010-07-05 2013-05-16 Kabushiki Kaisha Topcon Point cloud data processing device, point cloud data processing system, point cloud data processing method, and point cloud data processing program
CN105512646A (en) * 2016-01-19 2016-04-20 腾讯科技(深圳)有限公司 Data processing method, data processing device and terminal
CN105913485A (en) * 2016-04-06 2016-08-31 北京小小牛创意科技有限公司 Three-dimensional virtual scene generation method and device
CN106204656A (en) * 2016-07-21 2016-12-07 中国科学院遥感与数字地球研究所 Target based on video and three-dimensional spatial information location and tracking system and method

Also Published As

Publication number Publication date
CN108573522A (en) 2018-09-25

Similar Documents

Publication Publication Date Title
CN108573522B (en) Display method of mark data and terminal
US9489766B2 (en) Position searching method and apparatus based on electronic map
US9710946B2 (en) Method and apparatus for displaying point of interest
EP1840517B1 (en) Real-time spherical correction of map data
CN107993282B (en) Dynamic measurable live-action map making method
US10354433B2 (en) Method and apparatus for generating an abstract texture for a building facade or model
WO2017067390A1 (en) Method and terminal for obtaining depth information of low-texture regions in image
US20190101407A1 (en) Navigation method and device based on augmented reality, and electronic device
US10740946B2 (en) Partial image processing method, device, and computer storage medium
EP3885871A1 (en) Surveying and mapping system, surveying and mapping method and apparatus, device and medium
CN105318881A (en) Map navigation method, and apparatus and system thereof
CN103514626A (en) Method and device for displaying weather information and mobile terminal
CN109165606B (en) Vehicle information acquisition method and device and storage medium
CN112883900B (en) Method and device for bare-ground inversion of visible images of remote sensing images
US10607385B2 (en) Augmented reality positioning and tracking system and method
CN115546377A (en) Video fusion method and device, electronic equipment and storage medium
US20130141433A1 (en) Methods, Systems and Computer Program Products for Creating Three Dimensional Meshes from Two Dimensional Images
AU2018450016B2 (en) Method and apparatus for planning sample points for surveying and mapping, control terminal and storage medium
CN108072375B (en) Information identification method in navigation and terminal
CN111354037A (en) Positioning method and system
EP3326404B1 (en) Evaluating near range communications quality
JP5271965B2 (en) Object distribution range setting device and object distribution range setting method
CN108961742B (en) Road condition road network information processing method, server and computer storage medium
CN115527028A (en) Map data processing method and device
CN111950356B (en) Seal text positioning method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant