CN107766367B - Automatic checking method and device for data matching

Info

Publication number
CN107766367B
Authority
CN
China
Prior art keywords
data
identifier
data identifier
voice
graphic
Prior art date
Legal status
Active
Application number
CN201610687167.9A
Other languages
Chinese (zh)
Other versions
CN107766367A (en)
Inventor
刘娟
王晓娟
Current Assignee
Navinfo Co Ltd
Original Assignee
Navinfo Co Ltd
Priority date
Filing date
Publication date
Application filed by Navinfo Co Ltd filed Critical Navinfo Co Ltd
Priority to CN201610687167.9A
Publication of CN107766367A
Application granted
Publication of CN107766367B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 - Geographical information databases

Abstract

The application discloses an automatic checking method and device for data matching. The method comprises: reading preset path information for storing a basic data table and preset path information for storing a voice graphic identifier list, and obtaining the basic data table and the voice graphic identifier list respectively; extracting a first voice data identifier and a first graphic data identifier that are contained in the basic data table and associated with basic data, as well as a second voice data identifier and a second graphic data identifier contained in the voice graphic identifier list; matching the first voice data identifier and the first graphic data identifier with the second voice data identifier and the second graphic data identifier respectively; and automatically checking the data matching according to the matching result. The method effectively improves both the efficiency and the accuracy of the automatic checking of data matching.

Description

Automatic checking method and device for data matching
Technical Field
The present application relates to the field of electronic map technologies, and in particular, to an automatic checking method and an automatic checking device for data matching.
Background
With the continuous development of network technology, the services that service providers offer to users have become increasingly diverse; map navigation is one example.
At present, in order to provide a more accurate map navigation service for the user, a service provider generally collects not only backgrounds, roads and information points (i.e., basic data) but also voice data and graphic data (the basic data subsequently establishes a corresponding relationship with the voice data and with the graphic data respectively, while the voice data itself has no direct corresponding relationship with the graphic data; the two are hereinafter collectively referred to as "voice and graphic data"); an electronic map then needs to be built from the collected data.
In the whole process of establishing the electronic map, in the prior art, the basic data and the corresponding voice and graphic data need to be matched and associated; in the data conversion stage the basic data is converted according to the requirements of the user, and the converted basic data and the voice and graphic data are then compiled, by a compiling tool, into data that the user's navigation device can recognize.
However, during this process the basic data and the voice and graphic data may become mismatched, that is, some basic data has no corresponding voice and graphic data, or some voice and graphic data has no corresponding basic data. For example, in the conversion stage the user may require that roads inside all residential areas be masked; when the conversion tool converts the basic data according to this requirement, the basic data corresponding to those roads is filtered out, but the associated voice and graphic data is not, so that voice and graphic data is left with no corresponding basic data. Since the final electronic map requires the basic data and the voice and graphic data to correspond one to one (every piece of basic data has corresponding voice and graphic data, and every piece of voice and graphic data has corresponding basic data), a subsequent manual inspection is needed.
Obviously, basic data and voice and graphic data usually exist in large quantities, so manually checking whether they correspond to each other one by one inevitably results in low checking efficiency and is prone to error.
Disclosure of Invention
In view of this, embodiments of the present invention provide an automatic checking method and an automatic checking device for data matching, so as to solve the problems of low checking efficiency and easy occurrence of errors in the existing data checking process.
In one aspect, to solve the above technical problem, an embodiment of the present invention provides an automatic checking method for data matching, where the method includes:
reading path information of a preset storage basic data table in a configuration file, and acquiring a basic data table established according to a preset data structure format;
acquiring a pre-established voice graphic identifier list according to preset path information of a stored voice graphic identifier list;
extracting a first voice data identifier and a first graphic data identifier which are contained in the basic data table and are related to basic data, and extracting a second voice data identifier and a second graphic data identifier which are contained in a voice graphic identifier list;
matching the first voice data identifier and the first graphic data identifier with the second voice data identifier and the second graphic data identifier respectively to determine a matching result;
and automatically checking the data matching according to the matching result.
Accordingly, to implement the foregoing method, an embodiment of the present invention provides an automatic inspection apparatus for data matching, where the apparatus includes:
the acquisition module is used for reading the path information of a preset storage basic data table in the configuration file and acquiring the basic data table established according to a preset data structure format;
the acquisition module is also used for acquiring a pre-established voice graphic identifier list according to the preset path information of the stored voice graphic identifier list;
the extraction module is used for extracting a first voice data identifier and a first graphic data identifier which are contained in the basic data table and are related to the basic data, and a second voice data identifier and a second graphic data identifier which are contained in the voice graphic identifier list;
the matching module is used for matching the first voice data identifier and the first graphic data identifier with the second voice data identifier and the second graphic data identifier respectively;
and the checking module is used for automatically checking the data matching according to the matching result.
The embodiment of the invention provides an automatic checking method and device for data matching. The method comprises: reading path information of a preset storage basic data table in a configuration file and acquiring a basic data table established according to a preset data structure format; acquiring a pre-established voice graphic identifier list according to preset path information of the stored voice graphic identifier list; extracting a first voice data identifier and a first graphic data identifier which are contained in the basic data table and associated with basic data, and a second voice data identifier and a second graphic data identifier which are contained in the voice graphic identifier list; matching the first voice data identifier and the first graphic data identifier with the second voice data identifier and the second graphic data identifier respectively; and automatically checking data matching according to the matching result. By this method, voice data and graphic data that have no corresponding basic data, or basic data that has no corresponding voice data and graphic data, can be checked automatically, which greatly improves both the efficiency and the accuracy of the automatic checking of data matching.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of an automatic checking process for data matching provided by an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an automatic inspection apparatus for data matching according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to further embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic view of an automatic checking process for data matching according to an embodiment of the present invention, where the method includes the following steps:
s101: and reading path information of a storage basic data table preset in the configuration file, and acquiring the basic data table established according to a preset data structure format.
S102: and acquiring a pre-established voice graphic identifier list according to preset path information of the stored voice graphic identifier list.
S103: and extracting a first voice data identifier and a first graphic data identifier which are contained in the basic data table and are associated with the basic data, and a second voice data identifier and a second graphic data identifier which are contained in the voice graphic identifier list.
In practical application, in order to provide a more accurate map navigation service for users, a service provider generally needs, in the process of establishing the whole electronic map, to collect not only basic data such as roads and information points (for example, a certain XXX expressway), but also the data used to make voice data and the data used to make graphic data. For example, the name "XXX" of the XXX expressway is collected and voice data is later made from that name; a direction board at a certain fork on the XXX expressway is collected and graphic data is later made from that direction board.
Further, while the user is navigating, the electronic map in some cases displays the user's current location and the surrounding area (for example, the user is driving and is about fifty meters from a fork in a certain road, and the electronic map displays the current location and its surroundings). If the surrounding area contains a location requiring a voice prompt (for example, the fork fifty meters ahead of the user), the electronic map needs to announce the name of that location by voice; and if the location requiring a voice prompt has a sign such as a direction board, graphic data of that direction board needs to be made so that it can be displayed to the user through the electronic map. Meanwhile, in most cases the basic data is stored in a database while the voice data and the graphic data are stored locally; that is, the basic data is stored in one place and the voice data and graphic data in another.
To sum up, when the electronic map displays a position that has both voice data and graphic data, that voice data and graphic data must be found and retrieved from where they are stored. For example, if the user is fifty meters from an intersection on a certain road (and the intersection has voice data and graphic data), then when the electronic map displays the user's current location and the intersections in the surrounding area, it needs to obtain locally the voice data containing the name of the intersection and the graphic data containing the corresponding direction board, announce the name of the intersection to the user by voice, and display the direction board of the intersection to the user.
Therefore, in this embodiment, the corresponding relationships between the basic data and the voice data and between the basic data and the graphic data can be established by adding identifiers to the basic data, the voice data and the graphic data; these corresponding relationships are stored in the database together with the basic data. When the electronic map displays basic data to the user during navigation, the corresponding voice data and graphic data are looked up according to the pre-established identifier correspondences and provided to the user.
It should be noted that, the electronic map includes many basic data, but not every basic data has corresponding voice data and graphic data, that is, some basic data has corresponding voice data, some basic data has corresponding graphic data, some basic data has corresponding voice data and graphic data, and some basic data has no corresponding voice data or graphic data.
In addition, when a corresponding relationship is established for a given piece of basic data, the identifiers used in its relationship with the voice data and in its relationship with the graphic data may be the same identifier; in this embodiment, however, different identifiers may also be added to the basic data, the voice data and the graphic data respectively. In either case, the corresponding relationships between the basic data and the voice data and between the basic data and the graphic data must be added to the database, and further to the basic data table in the database, so that the voice data and graphic data corresponding to the basic data can be found according to these relationships.
As an optional implementation manner, based on the foregoing embodiment, in order to conveniently manage and query locally stored voice data and graphics data, in this embodiment, a voice graphics identifier list needs to be established according to the identifier of the voice data and the graphics data, and this embodiment also provides a manner of further establishing the voice graphics identifier list: acquiring a voice data file and a graphic data file, extracting a voice data identifier in the voice data file and a graphic data identifier in the graphic data file, and adding the voice data identifier and the graphic data identifier to a voice graphic identifier list.
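A minimal sketch of how such a voice graphic identifier list might be assembled from locally stored voice and graphic data files is given below. The file-naming scheme (each file named after its identifier) and the function name are illustrative assumptions only; the patent does not prescribe a concrete format.

    from pathlib import Path

    def build_voice_graphic_identifier_list(voice_dir, graphic_dir):
        # Collect identifiers from locally stored voice and graphic data files.
        # Assumes, purely for illustration, that each file is named after its
        # identifier (e.g. AA.wav, BB.png); the extraction rule is not fixed here.
        identifier_list = []
        for path in sorted(Path(voice_dir).glob("*")):
            identifier_list.append(("voice", path.stem))    # second voice data identifier
        for path in sorted(Path(graphic_dir).glob("*")):
            identifier_list.append(("graphic", path.stem))  # second graphic data identifier
        return identifier_list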
In the actual process of building the electronic map, after some stages of processing (e.g., the conversion stage) and due to certain circumstances (e.g., user requirements), there may be basic data that should have corresponding voice data or graphic data but for which no corresponding voice data or graphic data exists locally, or voice data or graphic data that should have corresponding basic data but for which no corresponding basic data exists; that is, the basic data and the voice or graphic data do not match. Since the electronic map finally provided to the user requires a one-to-one correspondence between the basic data and the voice or graphic data, in this embodiment the basic data, the voice data and the graphic data need to be checked after such stages finish, to determine whether the basic data corresponds to the voice data and the graphic data one to one.
In the whole checking process, the path information of the storage location of the basic data table, preset in a configuration file, is first read, and the basic data table established according to the preset data structure format is acquired; the first voice data identifier and the first graphic data identifier associated with the basic data are extracted from the basic data table. Meanwhile, the pre-established voice graphic identifier list is acquired according to the preset path information of the stored voice graphic identifier list, and the second voice data identifier and the second graphic data identifier are extracted from it.
The obtaining and extracting of the first voice data identifier, the first graphic data identifier, the second voice data identifier and the second graphic data identifier may be performed by a voice graphic automatic matching inspection tool (hereinafter referred to as a "tool", that is, an automatic inspection apparatus for data matching).
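As a rough illustration of what such a tool might do, the following sketch reads the two path entries from a configuration file, loads the basic data table and the voice graphic identifier list, and pulls out the four groups of identifiers. The configuration keys, the file formats (an INI file and CSV tables) and the field names are all assumptions made for this sketch, not part of the patent.

    import configparser
    import csv

    def load_identifiers(config_path):
        # Read the preset path information from the configuration file.
        config = configparser.ConfigParser()
        config.read(config_path)
        basic_table_path = config["paths"]["basic_data_table"]
        id_list_path = config["paths"]["voice_graphic_identifier_list"]

        # Basic data table: one row per basic data entry, with its associated
        # first voice data identifier and first graphic data identifier.
        first_voice_ids, first_graphic_ids = set(), set()
        with open(basic_table_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                if row.get("voice_id"):
                    first_voice_ids.add(row["voice_id"])
                if row.get("graphic_id"):
                    first_graphic_ids.add(row["graphic_id"])

        # Voice graphic identifier list: one row per locally stored voice or
        # graphic data item (second voice / graphic data identifiers).
        second_voice_ids, second_graphic_ids = set(), set()
        with open(id_list_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                if row["kind"] == "voice":
                    second_voice_ids.add(row["identifier"])
                else:
                    second_graphic_ids.add(row["identifier"])

        return first_voice_ids, first_graphic_ids, second_voice_ids, second_graphic_ids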
It should be noted that the aforementioned association refers to the corresponding relationship between basic data and voice data or between basic data and graphic data, that is, the corresponding relationship between the identifier of the basic data and the identifier of the voice data or of the graphic data. To describe the technical solution of this embodiment more clearly, the identifier of voice data involved in a corresponding relationship contained in the basic data table is defined as a first voice data identifier, the identifier of graphic data involved in a corresponding relationship contained in the basic data table is defined as a first graphic data identifier, the identifier of voice data contained in the voice graphic identifier list is defined as a second voice data identifier, and the identifier of graphic data contained in the voice graphic identifier list is defined as a second graphic data identifier. If a certain piece of basic data corresponds to voice data, the first voice data identifier associated with that basic data in the basic data table is identical to the second voice data identifier of that voice data; similarly, if a certain piece of basic data corresponds to graphic data, the first graphic data identifier associated with that basic data in the basic data table is identical to the second graphic data identifier of that graphic data.
For example, for the sake of simplicity and clarity, assume that the corresponding relationships stored in the basic data table are as shown in Table 1:
(Table 1 is reproduced as an image in the original publication; it lists each basic data entry together with its associated first voice data identifier and first graphic data identifier.)
TABLE 1
The voice graphic identifier list is shown in Table 2:
second voice data identifier AA
Second graphic data identification BB
Second voice data identity CC
Second graphic data identification DD
Second voice data identification EE
Second graphic data flag FF
TABLE 2
The tool reads path information of a storage basic data table preset in a configuration file, acquires the basic data table established according to a preset data structure format, and extracts a first voice data identifier and a first graphic data identifier, as shown in table 3:
first voice data identifier AA
First graphic data identification BB
First voice data identity CC
First graphic data identification DD
TABLE 3
Meanwhile, the tool may acquire the pre-established voice graphic identifier list according to the preset path information of the stored voice graphic identifier list, and extract the second voice data identifiers and the second graphic data identifiers, as shown in Table 2.
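Using the example identifiers from Table 2 and Table 3, the four groups can be represented, purely for illustration, as the following sets (this encoding is an assumption of the sketch, not something prescribed by the method):

    # First identifiers extracted from the basic data table (Table 3).
    first_voice_ids   = {"AA", "CC"}
    first_graphic_ids = {"BB", "DD"}

    # Second identifiers extracted from the voice graphic identifier list (Table 2).
    second_voice_ids   = {"AA", "CC", "EE"}
    second_graphic_ids = {"BB", "DD", "FF"}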
S104: and matching the first voice data identifier and the first graphic data identifier with the second voice data identifier and the second graphic data identifier respectively to determine a matching result.
In this embodiment, after the first voice data identifier and the first graphic data identifier are extracted from the basic data table and the second voice data identifier and the second graphic data identifier are extracted from the voice graphic identifier list, the first voice data identifier and the first graphic data identifier may be respectively matched with the second voice data identifier and the second graphic data identifier.
In the whole matching process, this embodiment provides three matching manners. The first matching manner takes the second voice data identifier and the second graphic data identifier contained in the voice graphic identifier list as standard data, that is, the data in the voice graphic identifier list is treated as correct, and the data contained in the basic data table is the data to be checked. For example, with the second voice data identifier as the standard, the first voice data identifier is matched against the second voice data identifier, and the first voice data identifiers not matched with second voice data identifiers, together with the second voice data identifiers not matched with first voice data identifiers, are taken as the matching result; likewise, with the second graphic data identifier as the standard, the first graphic data identifier is matched against the second graphic data identifier, and the first graphic data identifiers not matched with second graphic data identifiers, together with the second graphic data identifiers not matched with first graphic data identifiers, are taken as the matching result.
As an alternative embodiment, continuing the above example, assume that the second voice data identifiers and the second graphic data identifiers contained in the voice graphic identifier list (i.e., Table 2) are taken as standard data. With the second voice data identifier as the standard, the first voice data identifier is matched against the second voice data identifier, and the first voice data identifiers not matched with second voice data identifiers and the second voice data identifiers not matched with first voice data identifiers are taken as the matching result; with the second graphic data identifier as the standard, the first graphic data identifiers not matched with second graphic data identifiers and the second graphic data identifiers not matched with first graphic data identifiers are taken as the matching result, as shown in Table 4:
second voice data identification EE
Second graphic data flag FF
TABLE 4
It should be noted that, if the first matching method is used, a program instruction "set the second voice data identifier and the second graphic data identifier included in the voice graphic identifier list as standard data" is written in advance in a built-in program of the tool before performing automatic checking of data matching, and subsequently, in the process of performing automatic checking of data matching, after the tool acquires the pre-established voice graphic identifier list, the tool directly takes the second voice data identifier and the second graphic data identifier included in the voice graphic identifier list as standard data, matches the first voice data identifier with the second voice data identifier, and matches the first graphic data identifier with the second graphic data identifier.
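A minimal sketch of the first matching manner, reusing the example sets above: the second identifiers are treated as standard data and the comparison is expressed as a plain set difference (the comparison primitive is an assumption of this sketch; the patent does not mandate one):

    def match_against_standard(standard_ids, checked_ids):
        # Return the identifiers that fail to match in either direction.
        unmatched_checked  = checked_ids - standard_ids   # checked ids with no counterpart
        unmatched_standard = standard_ids - checked_ids   # standard ids with no counterpart
        return unmatched_checked, unmatched_standard

    # First matching manner: the voice graphic identifier list is the standard.
    voice_result   = match_against_standard(second_voice_ids, first_voice_ids)
    graphic_result = match_against_standard(second_graphic_ids, first_graphic_ids)
    # With the example data: voice_result == (set(), {"EE"}) and
    # graphic_result == (set(), {"FF"}), which is exactly the content of Table 4.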
The second matching manner takes the first voice data identifier and the first graphic data identifier extracted from the basic data table as standard data, that is, the data extracted from the basic data table is treated as correct, and the data contained in the voice graphic identifier list is the data to be checked. For example, with the first voice data identifier as the standard, the second voice data identifier is matched against the first voice data identifier, and the second voice data identifiers not matched with first voice data identifiers and the first voice data identifiers not matched with second voice data identifiers are taken as the matching result; with the first graphic data identifier as the standard, the second graphic data identifier is matched against the first graphic data identifier, and the second graphic data identifiers not matched with first graphic data identifiers and the first graphic data identifiers not matched with second graphic data identifiers are taken as the matching result.
As an alternative embodiment, continuing the example in steps S101 to S103, assume that the first voice data identifier and the first graphic data identifier extracted from the basic data table (i.e., Table 3) are taken as standard data. With the first voice data identifier as the standard, the second voice data identifier is matched against the first voice data identifier, and the second voice data identifiers not matched with first voice data identifiers and the first voice data identifiers not matched with second voice data identifiers are taken as the matching result; with the first graphic data identifier as the standard, the second graphic data identifier is matched against the first graphic data identifier, and the second graphic data identifiers not matched with first graphic data identifiers and the first graphic data identifiers not matched with second graphic data identifiers are taken as the matching result. All matching results are shown in Table 4.
It should be noted that, if the second matching method is used, a program instruction "set the first voice data identifier and the first graph data identifier associated with the base data included in the base data table as standard data" in advance in the built-in program of the tool before performing the automatic check of data matching is required, and then, in the process of performing the automatic check of data matching, after the tool acquires the base data table established according to the preset data structure format, the tool directly takes the first voice data identifier and the first graph data identifier associated with the base data included in the base data table as standard data, matches the second voice data identifier with the first voice data identifier, and matches the second graph data identifier with the first graph data identifier.
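Under the same assumed representation, the second matching manner is the same comparison with the roles reversed, the basic data table now acting as the standard (reusing the helper and example sets from the sketch above):

    # Second matching manner: the basic data table is the standard.
    voice_result   = match_against_standard(first_voice_ids, second_voice_ids)
    graphic_result = match_against_standard(first_graphic_ids, second_graphic_ids)
    # With the example data this again flags EE and FF (the content of Table 4),
    # now as second identifiers that have no corresponding basic data.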
As an alternative implementation manner based on the foregoing embodiment, the third matching manner treats both the first voice data identifier and first graphic data identifier extracted from the basic data table and the second voice data identifier and second graphic data identifier in the voice graphic identifier list as data to be checked, that is, Table 3 and Table 2 are compared against each other. For example, the first voice data identifier and the first graphic data identifier are matched with the second voice data identifier and the second graphic data identifier respectively; the first voice data identifiers not matched with second voice data identifiers, the first graphic data identifiers not matched with second graphic data identifiers, the second voice data identifiers not matched with first voice data identifiers, and the second graphic data identifiers not matched with first graphic data identifiers are all taken as the matching result.
It should be noted that, in the third matching method, in the matching process, the first voice data identifier and the first graphic data identifier are used as the standard, the second voice data identifier and the second graphic data identifier are compared with the standard, and the matching result is determined, or the second voice data identifier and the second graphic data identifier are used as the standard, and the first voice data identifier and the first graphic data identifier are compared with the standard, and the matching result is determined.
As an alternative embodiment, continuing the example in steps S101 to S103, assume that the first voice data identifier and the first graphic data identifier extracted from the basic data table (i.e., Table 3) and the second voice data identifier and the second graphic data identifier in the voice graphic identifier list (i.e., Table 2) are all used as data to be checked. The first voice data identifier and the first graphic data identifier are matched with the second voice data identifier and the second graphic data identifier respectively; the first voice data identifiers not matched with second voice data identifiers, the first graphic data identifiers not matched with second graphic data identifiers, the second voice data identifiers not matched with first voice data identifiers, and the second graphic data identifiers not matched with first graphic data identifiers are taken as the matching result. All matching results are shown in Table 4.
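In the same illustrative sketch, the third matching manner amounts to taking the differences in both directions (equivalently a symmetric difference), again reusing the example sets defined above:

    def match_both_ways(first_ids, second_ids):
        # Third matching manner: neither side is taken as the standard.
        return {
            "first_unmatched":  first_ids - second_ids,   # first ids with no matching second id
            "second_unmatched": second_ids - first_ids,   # second ids with no matching first id
        }

    voice_mismatch   = match_both_ways(first_voice_ids, second_voice_ids)
    graphic_mismatch = match_both_ways(first_graphic_ids, second_graphic_ids)
    # Example data: {"first_unmatched": set(), "second_unmatched": {"EE"}} for voice,
    # and {"first_unmatched": set(), "second_unmatched": {"FF"}} for graphics (Table 4).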
S105: and automatically checking the data matching according to the matching result.
In this embodiment, after the unmatched first voice data identifiers, first graphic data identifiers, second voice data identifiers and second graphic data identifiers (i.e., the matching result) are determined according to any one of the three matching manners, data matching is checked automatically according to the matching result. If everything matches, the automatic checking result is correct; otherwise an error in the automatic checking result of data matching can be prompted, so that the data can then be processed automatically or manually.
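A sketch of the check itself, under the same assumptions: if every group of mismatches is empty the automatic check passes, otherwise the unmatched identifiers are reported so the data can be corrected automatically or manually. The function name and report format are illustrative only.

    def automatic_check(mismatches):
        # Pass if no identifier is left unmatched, otherwise report them.
        unmatched = {name: ids for name, ids in mismatches.items() if ids}
        if not unmatched:
            return "check passed: basic data and voice/graphic data correspond one to one"
        return f"check failed, unmatched identifiers: {unmatched}"

    # With the Table 4 result of the example above:
    print(automatic_check({"second voice ids": {"EE"}, "second graphic ids": {"FF"}}))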
Further, if the first voice data identifiers not matched with second voice data identifiers, the second voice data identifiers not matched with first voice data identifiers, the first graphic data identifiers not matched with second graphic data identifiers and the second graphic data identifiers not matched with first graphic data identifiers are determined according to the first matching manner, then the basic data associated with the unmatched first voice data identifiers and the basic data associated with the unmatched first graphic data identifiers can be determined directly from those identifiers, and the determined basic data is deleted; and, from the second voice data identifiers not matched with first voice data identifiers and the second graphic data identifiers not matched with first graphic data identifiers, the basic data corresponding to those second voice data identifiers and second graphic data identifiers is determined and added to the basic data table.
As an alternative embodiment, continuing the example of the first matching manner in step S104, assume that the basic data corresponding to the second voice data identifier EE is basic data E and the basic data corresponding to the second graphic data identifier FF is basic data F; the tool therefore adds basic data E, corresponding to the second voice data identifier EE, and basic data F, corresponding to the second graphic data identifier FF, to the basic data table (i.e., Table 1), as shown in Table 5:
(Table 5 is reproduced as an image in the original publication; it is Table 1 with basic data E, associated with identifier EE, and basic data F, associated with identifier FF, appended.)
TABLE 5
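One possible sketch of the handling described for the first matching manner, assuming the basic data table is held as a list of record dictionaries and that a lookup for the basic data belonging to an unmatched second identifier is available (both are assumptions of the sketch, not given by the patent):

    def fix_basic_data_table(basic_table, unmatched_first_ids, unmatched_second_ids, lookup_basic_data):
        # First matching manner: the voice graphic identifier list is the standard.
        # Delete basic data whose first identifiers have no counterpart.
        basic_table = [
            rec for rec in basic_table
            if rec.get("voice_id") not in unmatched_first_ids
            and rec.get("graphic_id") not in unmatched_first_ids
        ]
        # Add the basic data determined for unmatched second identifiers
        # (e.g. basic data E for EE and basic data F for FF in the example above).
        for identifier in unmatched_second_ids:
            basic_table.append(lookup_basic_data(identifier))
        return basic_table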
If the second voice data identifiers not matched with first voice data identifiers, the first voice data identifiers not matched with second voice data identifiers, the second graphic data identifiers not matched with first graphic data identifiers and the first graphic data identifiers not matched with second graphic data identifiers are determined according to the second matching manner, then the voice data corresponding to the unmatched second voice data identifiers and the graphic data corresponding to the unmatched second graphic data identifiers can be determined directly, in the area storing the voice and graphic data, from those identifiers, and the determined voice data and graphic data are deleted; the voice data corresponding to the first voice data identifiers not matched with second voice data identifiers and the graphic data corresponding to the first graphic data identifiers not matched with second graphic data identifiers are then determined, and the determined voice data and graphic data are added to the area storing the voice and graphic data.
As an alternative implementation, continuing the example in step S104, after the data in Table 4 is determined, the tool directly deletes the voice data corresponding to the second voice data identifier EE and the graphic data corresponding to the second graphic data identifier FF.
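For the second matching manner the corrective action falls on the locally stored voice and graphic data instead. A rough sketch, assuming files named after their identifiers and a caller-supplied way of obtaining missing data (both assumptions of the sketch):

    from pathlib import Path

    def fix_voice_graphic_storage(storage_dir, unmatched_second_ids, unmatched_first_ids, fetch_data):
        # Second matching manner: the basic data table is the standard.
        storage = Path(storage_dir)
        # Delete voice/graphic data whose second identifiers have no basic data.
        for identifier in unmatched_second_ids:
            for path in storage.glob(f"{identifier}.*"):
                path.unlink()
        # Add voice/graphic data for first identifiers that currently have none.
        for identifier in unmatched_first_ids:
            (storage / identifier).write_bytes(fetch_data(identifier))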
In addition, if the unmatched first voice data identifiers, first graphic data identifiers, second voice data identifiers and second graphic data identifiers are determined according to the third matching manner, the following is performed: determining, according to the unmatched first voice data identifier, first graphic data identifier, second voice data identifier and second graphic data identifier, the voice data corresponding to the first voice data identifier and the basic data associated with the first voice data identifier, the graphic data corresponding to the first graphic data identifier and the basic data associated with the first graphic data identifier, the voice data corresponding to the second voice data identifier and the basic data corresponding to the second voice data identifier, and the graphic data corresponding to the second graphic data identifier and the basic data corresponding to the second graphic data identifier;
judging whether to add voice data corresponding to the first voice data identifier or delete basic data associated with the first voice data identifier according to preset standard data;
judging whether to add the graphic data corresponding to the first graphic data identifier or delete the basic data associated with the first graphic data identifier according to preset standard data;
according to preset standard data, judging whether to delete the voice data corresponding to the second voice data identifier or to add basic data corresponding to the second voice data identifier;
according to preset standard data, judging whether to delete the graphic data corresponding to the second graphic data identifier or to add basic data corresponding to the second graphic data identifier;
according to the judgment result, adding voice data corresponding to the first voice data identifier or deleting basic data associated with the first voice data identifier, adding first graphic data corresponding to the first graphic data identifier or deleting basic data associated with the first graphic data identifier, adding second voice data corresponding to the second voice data identifier or deleting basic data corresponding to the second voice data identifier, and adding second graphic data corresponding to the second graphic data identifier or deleting basic data corresponding to the second graphic data identifier.
It should be noted that the standard data is set in advance: before the automatic check of data matching is performed, a program instruction specifying which data identifiers are to be treated as standard data is written into the built-in program of the tool; for example, some first voice data identifiers in the basic data table and some second voice data identifiers in the voice graphic identifier list may be used as standard data. In the subsequent automatic check of data matching, the four judgments above are then made according to the preset standard data.
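For the third matching manner, the decision of whether to add a counterpart or delete the data carrying an unmatched identifier is made against the pre-configured standard data. One possible sketch, with the standard data simply modelled as a set of trusted identifiers (an assumption of the sketch):

    def resolve_with_standard(unmatched_ids, standard_ids):
        # Decide, per unmatched identifier, whether its counterpart should be added
        # (the identifier belongs to standard data) or whether the data carrying it
        # should be deleted (it does not belong to standard data).
        decisions = {}
        for identifier in sorted(unmatched_ids):
            decisions[identifier] = "add counterpart" if identifier in standard_ids else "delete"
        return decisions

    # Example: EE is configured as standard data, FF is not.
    print(resolve_with_standard({"EE", "FF"}, standard_ids={"EE"}))
    # {'EE': 'add counterpart', 'FF': 'delete'}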
By the above method, voice data and graphic data that have no corresponding basic data, or basic data that has no corresponding voice data and graphic data, can be checked automatically, which greatly improves both the efficiency and the accuracy of the automatic checking of data matching.
In practical applications, after the automatic data matching check of the voice data, graphic data and basic data in steps S101 to S105 is completed, map data for electronic navigation may be generated based on the basic data, the second voice data and the second graphic data.
Based on the same inventive concept, the embodiment of the present invention further provides an automatic inspection apparatus for data matching, as shown in fig. 2:
fig. 2 is a schematic structural diagram of an automatic checking apparatus for data matching according to an embodiment of the present invention, where the apparatus includes:
an obtaining module 201, configured to read path information of a storage basic data table preset in a configuration file, and obtain the basic data table established according to a preset data structure format; acquiring a pre-established voice graphic identifier list according to preset path information of a stored voice graphic identifier list;
an extracting module 202, configured to extract a first voice data identifier and a first graph data identifier that are included in the basic data table and are associated with the basic data, and a second voice data identifier and a second graph data identifier that are included in the voice graph identifier list;
the matching module 203 is configured to match the first voice data identifier and the first graphic data identifier with the second voice data identifier and the second graphic data identifier, respectively;
and the checking module 204 is configured to perform automatic checking on data matching according to a matching result.
As an optional implementation manner, the matching module 203 described in the above embodiment includes a first matching unit 2031 and a second matching unit 2032; wherein:
the first matching unit 2031 is configured to: matching the first voice data identifier with the second voice data identifier by taking the second voice data identifier as a standard; taking a first voice data identifier which is not matched with the second voice data identifier and a second voice data identifier which is not matched with the first voice data identifier as matching results;
the second matching unit 2032 is configured to match the first graphic data identifier with the second graphic data identifier using the second graphic data identifier as a standard; and taking the first graphic data identifier which is not matched with the second graphic data identifier and the second graphic data identifier which is not matched with the first graphic data identifier as matching results.
As an optional implementation manner, the first matching unit 2031 described in the foregoing embodiment is configured to: matching the second voice data identifier with the first voice data identifier by taking the first voice data identifier as a standard; taking a second voice data identifier which is not matched with the first voice data identifier and a first voice data identifier which is not matched with the second voice data identifier as matching results;
the second matching unit 2032 is configured to: matching the second graphic data identifier with the first graphic data identifier by taking the first graphic data identifier as a standard; and taking the second graphic data identifier which is not matched with the first graphic data identifier and the first graphic data identifier which is not matched with the second graphic data identifier as matching results.
As an optional implementation manner, the checking module 204 described in the foregoing embodiment is further configured to:
1) when a first voice data identifier which is not matched with a second voice data identifier, a second voice data identifier which is not matched with the first voice data identifier, a first graph data identifier which is not matched with a second graph data identifier and a second graph data identifier which is not matched with the first graph data identifier are used as matching results, determining basic data associated with the first voice data identifier and basic data associated with the first graph data identifier according to the first voice data identifier which is not matched with the first voice data identifier and the first graph data identifier; deleting the determined basic data associated with the first voice data identifier and the determined basic data associated with the first graphic data identifier; determining basic data corresponding to the second voice data identifier and basic data corresponding to the second graph data identifier according to a second voice data identifier which is not matched with the first voice data identifier and a second graph data identifier which is not matched with the first graph data identifier; adding the determined basic data into a basic data table;
2) when a second voice data identifier which is not matched with the first voice data identifier, a first voice data identifier which is not matched with the second voice data identifier, a second graph data identifier which is not matched with the first graph data identifier and a first graph data identifier which is not matched with the second graph data identifier are used as matching results, determining voice data corresponding to the second voice data identifier and graph data corresponding to the second graph data identifier in a region for storing voice graph data according to the second voice data identifier which is not matched with the second voice data identifier and the second graph data identifier; deleting the determined voice data and the determined graphic data, and determining the voice data corresponding to the first voice data identifier and the graphic data corresponding to the first graphic data identifier according to the first voice data identifier which is not matched with the second voice data identifier and the first graphic data identifier which is not matched with the second graphic data identifier; the determined voice data and the graphic data are added to the area storing the voice graphic data.
As an optional implementation manner, the checking module 204 described in the foregoing embodiment includes: a determining unit 2041, a judging unit 2042, and a modifying unit 2043; wherein:
the determining unit 2041 is configured to: determining voice data corresponding to a first voice data identifier and basic data associated with the first voice data identifier, graph data corresponding to the first graph data identifier and basic data associated with the first graph data identifier, voice data corresponding to a second voice data identifier and basic data corresponding to the second voice data identifier, graph data corresponding to the second graph data identifier and basic data corresponding to the second graph data identifier according to a first voice data identifier, a first graph data identifier, a second voice data identifier and a second graph data identifier which are not matched;
the judging unit 2042 is configured to: judging whether to add voice data corresponding to the first voice data identifier or delete basic data associated with the first voice data identifier according to preset standard data; judging whether to add the graphic data corresponding to the first graphic data identifier or delete the basic data associated with the first graphic data identifier according to preset standard data; according to preset standard data, judging whether to delete the voice data corresponding to the second voice data identifier or to add basic data corresponding to the second voice data identifier; according to preset standard data, judging whether to delete the graphic data corresponding to the second graphic data identifier or to add basic data corresponding to the second graphic data identifier;
the modifying unit 2043 is configured to: according to the judgment result, adding voice data corresponding to the first voice data identifier or deleting basic data associated with the first voice data identifier, adding first graphic data corresponding to the first graphic data identifier or deleting basic data associated with the first graphic data identifier, adding second voice data corresponding to the second voice data identifier or deleting basic data corresponding to the second voice data identifier, and adding second graphic data corresponding to the second graphic data identifier or deleting basic data corresponding to the second graphic data identifier.
As an optional implementation manner, the apparatus for automatically checking data matching in the foregoing embodiment further includes:
the map generation module 205 is configured to: generating map data for electronic navigation according to the basic data, the second voice data and the second graphic data after completing data matching automatic check;
the training module 206 includes an intelligent training model 2061 and a training library 2062, and is used for self-learning and optimizing an automatic checking mode of data matching according to the graphics and voice data stored in the training library.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, random access memory (RAM) and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (10)

1. An automatic checking method for data matching is characterized by comprising the following steps:
reading path information of a preset storage basic data table in a configuration file, and acquiring a basic data table established according to a preset data structure format;
acquiring a pre-established voice graphic identifier list according to preset path information of a stored voice graphic identifier list;
extracting a first voice data identifier and a first graphic data identifier which are contained in the basic data table and are related to basic data, and extracting a second voice data identifier and a second graphic data identifier which are contained in a voice graphic identifier list; wherein the first voice data identifier is defined as an identifier of corresponding voice data contained in the basic data table, the first graphic data identifier is defined as an identifier of corresponding graphic data contained in the basic data table, the second voice data identifier is defined as an identifier of voice data contained in the voice graphic identifier list, and the second graphic data identifier is defined as an identifier of graphic data contained in the voice graphic identifier list;
matching the first voice data identifier and the first graphic data identifier with the second voice data identifier and the second graphic data identifier respectively to determine a matching result;
and automatically checking the data matching according to the matching result.
2. The method for automatically checking data matching according to claim 1, wherein the matching the first voice data id and the first graphic data id with the second voice data id and the second graphic data id, respectively, and determining the matching result further comprises:
matching the first voice data identifier with the second voice data identifier by taking the second voice data identifier as a standard; taking a first voice data identifier which is not matched with the second voice data identifier and a second voice data identifier which is not matched with the first voice data identifier as matching results;
matching the first graphic data identifier with the second graphic data identifier by taking the second graphic data identifier as a standard; taking a first graphic data identifier which is not matched with the second graphic data identifier and a second graphic data identifier which is not matched with the first graphic data identifier as matching results;
alternatively,
matching the second voice data identifier with the first voice data identifier by taking the first voice data identifier as a standard; taking a second voice data identifier which is not matched with the first voice data identifier and a first voice data identifier which is not matched with the second voice data identifier as matching results;
matching the second graphic data identifier with the first graphic data identifier by taking the first graphic data identifier as a standard; and taking the second graphic data identifier which is not matched with the first graphic data identifier and the first graphic data identifier which is not matched with the second graphic data identifier as matching results.
3. The automatic checking method for data matching according to claim 1 or 2, characterized in that:
when the first voice data identifier not matched with the second voice data identifier, the second voice data identifier not matched with the first voice data identifier, the first graphic data identifier not matched with the second graphic data identifier, and the second graphic data identifier not matched with the first graphic data identifier are taken as matching results, the step of automatically checking data matching according to the matching results further comprises:
determining basic data associated with the first voice data identifier and basic data associated with the first graph data identifier according to the first voice data identifier and the first graph data identifier which are not matched; deleting the determined basic data associated with the first voice data identifier and the determined basic data associated with the first graphic data identifier;
determining basic data corresponding to the second voice data identifier and basic data corresponding to the second graph data identifier according to a second voice data identifier which is not matched with the first voice data identifier and a second graph data identifier which is not matched with the first graph data identifier; adding the determined basic data into a basic data table;
when the second voice data identifier not matched with the first voice data identifier, the first voice data identifier not matched with the second voice data identifier, the second graphic data identifier not matched with the first graphic data identifier, and the first graphic data identifier not matched with the second graphic data identifier are taken as matching results, the step of automatically checking data matching according to the matching results further comprises:
according to the unmatched second voice data identifier and second graphic data identifier, determining voice data corresponding to the second voice data identifier and graphic data corresponding to the second graphic data identifier in a region for storing voice graphic data; deleting the determined voice data and the determined graphic data;
determining voice data corresponding to the first voice data identifier and graphic data corresponding to the first graphic data identifier according to the first voice data identifier which is not matched with the second voice data identifier and the first graphic data identifier which is not matched with the second graphic data identifier; the determined voice data and the graphic data are added to the area storing the voice graphic data.
4. The automatic checking method for data matching according to claim 1 or 2, wherein the step of automatically checking data matching according to the matching result further comprises:
determining voice data corresponding to the first voice data identifier and basic data associated with the first voice data identifier, graphic data corresponding to the first graphic data identifier and basic data associated with the first graphic data identifier, voice data corresponding to the second voice data identifier and basic data corresponding to the second voice data identifier, and graphic data corresponding to the second graphic data identifier and basic data corresponding to the second graphic data identifier according to the first voice data identifier, the first graphic data identifier, the second voice data identifier and the second graphic data identifier which are not matched;
judging whether to add voice data corresponding to the first voice data identifier or delete basic data associated with the first voice data identifier according to preset standard data;
judging whether to add the graphic data corresponding to the first graphic data identifier or delete the basic data associated with the first graphic data identifier according to preset standard data;
according to preset standard data, judging whether to delete the voice data corresponding to the second voice data identifier or to add basic data corresponding to the second voice data identifier;
according to preset standard data, judging whether to delete the graphic data corresponding to the second graphic data identifier or to add basic data corresponding to the second graphic data identifier;
and according to the judgment result, adding the voice data corresponding to the first voice data identifier or deleting the basic data associated with the first voice data identifier, adding the graphic data corresponding to the first graphic data identifier or deleting the basic data associated with the first graphic data identifier, deleting the voice data corresponding to the second voice data identifier or adding the basic data corresponding to the second voice data identifier, and deleting the graphic data corresponding to the second graphic data identifier or adding the basic data corresponding to the second graphic data identifier.
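One reading of claim 4 (an assumption, not claim language) is that the preset standard data acts as the arbiter for every unmatched identifier: if the standard contains the identifier, the missing side is filled in; otherwise the stray record is removed. A minimal sketch of that decision rule, with standard_ids and the two callbacks as illustrative names:

```python
def resolve_unmatched(unmatched_id, standard_ids, add_missing_side, delete_stray_side):
    """Decide, per the preset standard data, whether to add the missing
    counterpart or delete the stray record for one unmatched identifier."""
    if unmatched_id in standard_ids:
        add_missing_side(unmatched_id)   # the standard expects this record to exist
        return "added"
    delete_stray_side(unmatched_id)      # the standard does not know this identifier
    return "deleted"
```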
5. The automatic checking method for data matching according to claim 1 or 2, further comprising:
and generating map data for electronic navigation based on the basic data, the second voice data and the second graphic data after completing the automatic data matching check.
6. An automatic checking device for data matching, comprising:
the acquisition module is used for reading, from the configuration file, the preset path information for storing the basic data table and acquiring the basic data table established according to a preset data structure format; and is also used for acquiring a pre-established voice graphic identifier list according to the preset path information for storing the voice graphic identifier list;
the extraction module is used for extracting a first voice data identifier and a first graphic data identifier which are contained in the basic data table and are related to the basic data, and a second voice data identifier and a second graphic data identifier which are contained in the voice graphic identifier list; the first voice data identifier is defined as an identifier of corresponding voice data contained in the basic data table, the first graphic data identifier is defined as an identifier of corresponding graphic data contained in the basic data table, the second voice data identifier is defined as an identifier of voice data contained in the voice graphic identifier list, and the second graphic data identifier is defined as an identifier of graphic data contained in the voice graphic identifier list;
the matching module is used for matching the first voice data identifier and the first graphic data identifier with the second voice data identifier and the second graphic data identifier respectively;
and the checking module is used for automatically checking the data matching according to the matching result.
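A skeletal, non-normative sketch of the device in claim 6, assuming the configuration file is an INI file with a [paths] section holding the two path entries and that both files are CSV tables with voice_id and graphic_id columns; all of these names are illustrative assumptions, not part of the claimed apparatus.

```python
import configparser
import csv


class DataMatchChecker:
    """Acquisition, extraction, matching and checking modules from claim 6 (sketch)."""

    def __init__(self, config_path):
        # Acquisition module, step 1: read the preset path information from the config file.
        cfg = configparser.ConfigParser()
        cfg.read(config_path)
        self.basic_table_path = cfg["paths"]["basic_table"]
        self.id_list_path = cfg["paths"]["id_list"]

    def acquire(self):
        # Acquisition module, step 2: load the basic data table and the identifier list.
        with open(self.basic_table_path, newline="", encoding="utf-8") as f:
            self.basic_table = list(csv.DictReader(f))
        with open(self.id_list_path, newline="", encoding="utf-8") as f:
            self.id_list = list(csv.DictReader(f))

    def extract(self):
        # Extraction module: pull the four kinds of identifiers.
        self.first_voice_ids = {row["voice_id"] for row in self.basic_table}
        self.first_graphic_ids = {row["graphic_id"] for row in self.basic_table}
        self.second_voice_ids = {row["voice_id"] for row in self.id_list}
        self.second_graphic_ids = {row["graphic_id"] for row in self.id_list}

    def match(self):
        # Matching module: compare first identifiers against second identifiers.
        return {
            "voice_only_in_table": self.first_voice_ids - self.second_voice_ids,
            "voice_only_in_list": self.second_voice_ids - self.first_voice_ids,
            "graphic_only_in_table": self.first_graphic_ids - self.second_graphic_ids,
            "graphic_only_in_list": self.second_graphic_ids - self.first_graphic_ids,
        }

    def check(self):
        # Checking module: the two sources match iff every difference set is empty.
        result = self.match()
        return all(not ids for ids in result.values()), result
```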
7. The apparatus of claim 6, wherein the matching module comprises a first matching unit and a second matching unit; wherein:
the first matching unit is used for matching the first voice data identifier with the second voice data identifier by taking the second voice data identifier as a standard; taking a first voice data identifier which is not matched with the second voice data identifier and a second voice data identifier which is not matched with the first voice data identifier as matching results;
the second matching unit is used for matching the first graphic data identifier with the second graphic data identifier by taking the second graphic data identifier as a standard; taking a first graphic data identifier which is not matched with the second graphic data identifier and a second graphic data identifier which is not matched with the first graphic data identifier as matching results;
alternatively,
the first matching unit is used for matching the second voice data identifier with the first voice data identifier by taking the first voice data identifier as a standard; taking a second voice data identifier which is not matched with the first voice data identifier and a first voice data identifier which is not matched with the second voice data identifier as matching results;
the second matching unit is used for matching the second graphic data identifier with the first graphic data identifier by taking the first graphic data identifier as a standard; and taking the second graphic data identifier which is not matched with the first graphic data identifier and the first graphic data identifier which is not matched with the second graphic data identifier as matching results.
8. The apparatus of claim 6 or 7, wherein the checking module is used for:
when a first voice data identifier which is not matched with the second voice data identifier, a second voice data identifier which is not matched with the first voice data identifier, a first graphic data identifier which is not matched with the second graphic data identifier and a second graphic data identifier which is not matched with the first graphic data identifier are used as matching results, determining basic data associated with the first voice data identifier and basic data associated with the first graphic data identifier according to the first voice data identifier which is not matched with the second voice data identifier and the first graphic data identifier which is not matched with the second graphic data identifier; deleting the determined basic data associated with the first voice data identifier and the determined basic data associated with the first graphic data identifier; determining basic data corresponding to the second voice data identifier and basic data corresponding to the second graphic data identifier according to the second voice data identifier which is not matched with the first voice data identifier and the second graphic data identifier which is not matched with the first graphic data identifier; adding the determined basic data into the basic data table;
when a second voice data identifier which is not matched with the first voice data identifier, a first voice data identifier which is not matched with the second voice data identifier, a second graphic data identifier which is not matched with the first graphic data identifier and a first graphic data identifier which is not matched with the second graphic data identifier are used as matching results, determining voice data corresponding to the second voice data identifier and graphic data corresponding to the second graphic data identifier in a region for storing voice graphic data according to the second voice data identifier which is not matched with the first voice data identifier and the second graphic data identifier which is not matched with the first graphic data identifier; deleting the determined voice data and the determined graphic data; determining the voice data corresponding to the first voice data identifier and the graphic data corresponding to the first graphic data identifier according to the first voice data identifier which is not matched with the second voice data identifier and the first graphic data identifier which is not matched with the second graphic data identifier; and adding the determined voice data and the determined graphic data to the region for storing the voice graphic data.
9. The apparatus of claim 6 or 7, wherein the checking module comprises: a determination unit, a judgment unit and a modification unit; wherein:
the determination unit is used for: determining voice data corresponding to the first voice data identifier and basic data associated with the first voice data identifier, graphic data corresponding to the first graphic data identifier and basic data associated with the first graphic data identifier, voice data corresponding to the second voice data identifier and basic data corresponding to the second voice data identifier, and graphic data corresponding to the second graphic data identifier and basic data corresponding to the second graphic data identifier according to the first voice data identifier, the first graphic data identifier, the second voice data identifier and the second graphic data identifier which are not matched;
the judgment unit is used for: judging whether to add voice data corresponding to the first voice data identifier or delete basic data associated with the first voice data identifier according to preset standard data; judging whether to add the graphic data corresponding to the first graphic data identifier or delete the basic data associated with the first graphic data identifier according to preset standard data; according to preset standard data, judging whether to delete the voice data corresponding to the second voice data identifier or to add basic data corresponding to the second voice data identifier; according to preset standard data, judging whether to delete the graphic data corresponding to the second graphic data identifier or to add basic data corresponding to the second graphic data identifier;
the modification unit is used for: according to the judgment result, adding the voice data corresponding to the first voice data identifier or deleting the basic data associated with the first voice data identifier, adding the graphic data corresponding to the first graphic data identifier or deleting the basic data associated with the first graphic data identifier, deleting the voice data corresponding to the second voice data identifier or adding the basic data corresponding to the second voice data identifier, and deleting the graphic data corresponding to the second graphic data identifier or adding the basic data corresponding to the second graphic data identifier.
10. The apparatus of claim 6 or 7, further comprising:
the map generation module is used for generating map data for electronic navigation according to the basic data, the second voice data and the second graphic data after the automatic data matching check is completed;
and the training module comprises an intelligent training model and a training library, and is used for self-learning according to the graphic and voice data stored in the training library and for optimizing the automatic checking mode of data matching.
CN201610687167.9A 2016-08-18 2016-08-18 Automatic checking method and device for data matching Active CN107766367B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610687167.9A CN107766367B (en) 2016-08-18 2016-08-18 Automatic checking method and device for data matching

Publications (2)

Publication Number Publication Date
CN107766367A CN107766367A (en) 2018-03-06
CN107766367B CN107766367B (en) 2021-09-07

Family

ID=61261586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610687167.9A Active CN107766367B (en) 2016-08-18 2016-08-18 Automatic checking method and device for data matching

Country Status (1)

Country Link
CN (1) CN107766367B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2672401A1 (en) * 2012-06-06 2013-12-11 Samsung Electronics Co., Ltd Method and apparatus for storing image data
CN103761249A (en) * 2013-12-24 2014-04-30 北京恒华伟业科技股份有限公司 Data importing method and system based on data matching
CN105320664A (en) * 2014-06-12 2016-02-10 北京四维图新科技股份有限公司 Method and device for modifying electronic map relational data
CN105426372A (en) * 2014-09-17 2016-03-23 高德软件有限公司 Electronic map data manufacturing and updating method and apparatus

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9898578B2 (en) * 2003-04-04 2018-02-20 Agilent Technologies, Inc. Visualizing expression data on chromosomal graphic schemes
US8839140B2 (en) * 2008-05-23 2014-09-16 Microsoft Corporation Pivot search results by time and location
CN101998238B (en) * 2010-10-28 2014-03-05 中国联合网络通信集团有限公司 Positioning navigation method, map query system and positioning navigation system
US9625612B2 (en) * 2013-09-09 2017-04-18 Google Inc. Landmark identification from point cloud generated from geographic imagery data
CN103475731A (en) * 2013-09-23 2013-12-25 网易(杭州)网络有限公司 Media information matching and processing method and device
CN104236563A (en) * 2014-09-17 2014-12-24 沈阳美行科技有限公司 Method for recording actual road conditions during running of automobile through reality navigation
US9928302B2 (en) * 2014-11-10 2018-03-27 International Business Machines Corporation Merging data analysis paths

Also Published As

Publication number Publication date
CN107766367A (en) 2018-03-06

Similar Documents

Publication Publication Date Title
CN107590123B (en) Vehicular middle-location context reference resolution method and device
US11182544B2 (en) User interface for contextual document recognition
CN106528762B (en) Electronic map processing method and processing system for identifying interest points
CN104317909A (en) Method and device for verifying data of points of interest
CN109783589B (en) Method, device and storage medium for resolving address of electronic map
CN104537102A (en) Positive geocoding service method and system for obtaining longitude and latitude
EP3889797A1 (en) Database index and database query processing method, apparatus, and device
CN113626455A (en) Method and device for updating picture library in linkage manner, electronic equipment and storage medium
KR102184048B1 (en) System and method for checking of information about estate development plan based on geographic information system
CN103235757B (en) Several apparatus and method that input domain tested object is tested are made based on robotization
CN113743080A (en) Hierarchical address text similarity comparison method, device and medium
CN107247716B (en) Method and device for increasing electronic eye information, navigation chip and server
CN107766367B (en) Automatic checking method and device for data matching
CN113010169A (en) Method and apparatus for converting UI diagram into code file
CN110990651B (en) Address data processing method and device, electronic equipment and computer readable medium
CN111427977B (en) Electronic eye data processing method and device
CN111460084A (en) Resume structured extraction model training method and system
CN111198910A (en) Data fusion method and device
CN106443732B (en) Path diagram drawing method and system based on GPS
CN111063003B (en) Mine distribution diagram manufacturing method and system
CN108920749B (en) Pipeline two-dimensional and three-dimensional data updating method and device and computer readable storage medium
CN115641430B (en) Method, device, medium and computer equipment for determining interest surface
CN111435450A (en) Road data processing method and device
CN111198912A (en) Address data processing method and device
CN110941609B (en) Multi-dimensional searching method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant