CN116612224B - Visual management system of digital mapping - Google Patents

Visual management system of digital mapping

Info

Publication number
CN116612224B
CN116612224B (application CN202310875659.0A)
Authority
CN
China
Prior art keywords
image information
article
data
visual
continuous
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310875659.0A
Other languages
Chinese (zh)
Other versions
CN116612224A (en)
Inventor
元济勇
张红霞
王英石
熊云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Xintiandi Intelligent Engineering Co ltd
Original Assignee
Shandong Xintiandi Intelligent Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Xintiandi Intelligent Engineering Co ltd
Priority to CN202310875659.0A
Publication of CN116612224A
Application granted
Publication of CN116612224B
Active legal status
Anticipated expiration legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G06N3/0442: Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The application relates to a digital mapping visual management system. The system performs the following method steps: obtaining continuous article image information entering a visual space, where the continuous article image information is continuous image information along a continuous path and comprises initial image information and termination image information, the termination image information including its coordinate information in the visual space; obtaining article type data based on the continuous article image information, where the article type data comprises identifiable article type data and unrecognizable article type data; and obtaining an article model based on the article type data and presenting the article model in a preset mode in the visual space. The digital mapping visual management system makes it more convenient to obtain the virtual visual space corresponding to real assets, so that a user's assets are displayed in the visual space in a visual three-dimensional state, which is convenient for the user to manage and use.

Description

Visual management system of digital mapping
Technical Field
The application belongs to the technical field of visual processing, and particularly relates to a digital mapping visual management system.
Background
Asset management is an important link in enterprise operation and is significant for improving material utilization, reducing costs, and increasing efficiency. Traditional asset management relies mainly on manual recording, labeling, and stocktaking, which is time-consuming, labor-intensive, and prone to data errors. With the continuous development of modern technologies such as 3D visualization, sensor technology, and cloud computing, asset management can be realized more intelligently and efficiently.
Conventional 3D visualization techniques typically require that a three-dimensional model of each object be built in advance and imported into the system for management. For temporary storage spaces such as warehouses, collecting models of every object that may enter the space in advance is very cumbersome. To address this problem, more intelligent and automated asset management techniques need to be explored.
Disclosure of Invention
The application aims to solve the above problems by providing a digital mapping visual management system that is simple in structure and reasonable in design.
The application realizes the above purpose through the following technical scheme:
A first aspect of the present application provides a digital mapping visual management system comprising the following method steps: obtaining continuous article image information entering a visual space, wherein the continuous article image information is continuous image information on a continuous path, the continuous image information comprises initial image information and termination image information, and the termination image information comprises its coordinate information in the visual space; obtaining article type data based on the continuous article image information, wherein the article type data includes identifiable article type data and unrecognizable article type data; and obtaining an article model based on the article type data and presenting the article model in a preset mode in the visual space.
As a further optimization scheme of the application, when the article type data is unrecognizable article type data, identifiable article contour data and covered article contour data in the article type data are obtained. Virtual article contour data is derived from the correlation between the covered article contour data and the identifiable article contour data; article model data is then obtained from the virtual article contour data and the identifiable article contour data, the article model is generated from the article model data, and the article model is displayed in a preset mode in the visual space.
As a further optimization scheme of the application, the continuous article image information is article dynamic moving image data acquired from a fixed position and a fixed side. The article dynamic moving image data comprises the initial image information and the termination image information: the initial image information is the first image data in which the article enters the acquisition end's field of view, and the termination image information is the last image data in which the article leaves it.
As a further optimization scheme of the application, contour point data of the article is obtained from the article dynamic moving image data, and an identifiable article contour data set and a covered article contour data set are obtained by classifying the contour point data. An input feature set corresponding to the covered article contour data set is built from the two sets, an offset set corresponding to the covered article contour data set is obtained from a recurrent neural network model fed with the input feature set, and the covered article contour data is corrected with the offset set to obtain the article model data.
As a further preferred embodiment of the present application, the input feature set includes an input feature for each point in the covered article contour data set. Each input feature is a matrix whose first row carries the position code of the input feature and whose remaining rows carry the position data of every point in the identifiable article contour data set and of every point in the covered article contour data set.
As a further optimization scheme of the application, the recurrent neural network model comprises an LSTM unit.
As a further optimization scheme of the present application, the operation of the LSTM unit at the t-th time step includes: the forget gate $f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$; the input gate $i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$; the intermediate state $\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$; the output state $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$; the output gate $o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$; and the output $h_t = o_t \odot \tanh(c_t)$. Here $x_t$ is the input feature of the $t$-th covered article contour data point; $h_{t-1}$ and $c_{t-1}$ are the output and the output state of the LSTM unit at time step $t-1$; $W_f, U_f, W_i, U_i, W_c, U_c, W_o, U_o$ are the corresponding weight matrices; $b_f, b_i, b_c, b_o$ are the first to fourth bias terms; the sigmoid activation $\sigma$ confines its results to $(0, 1)$ and $\tanh$ confines its results to $(-1, 1)$; $\odot$ denotes point-by-point multiplication. Definition: $t \in \{1, 2, \dots, n\}$, where $n$ is the number of covered article contour data points; at $t = 0$, $h_0 = 0$ and $c_0 = 0$.
As a further optimization scheme of the application, the recurrent neural network model is connected to three classifiers, corresponding to the offsets of the three coordinate values of each covered article contour data point.
The application further provides a digital mapping visual management system comprising a memory, a processor, an acquisition end, and a visual end, wherein image data acquired by the acquisition end is input into the memory and the processor, a corresponding visual space is provided in the visual end, and the memory stores a digital mapping visual management method program which, when executed by the processor, realizes the following steps:
obtaining continuous article image information entering a visualization space; the continuous article image information is continuous image information on a continuous path; the continuous image information comprises initial image information and ending image information; the termination image information comprises coordinate information of the termination image information in a visual space; obtaining item type data based on the successive item image information; the article type data includes identifiable article type data and unrecognizable article type data; and obtaining an article model based on the article type data, and presenting the article model in a preset mode in a visual end.
As a further optimization scheme of the application, the acquisition end further comprises a position sensor arranged in the actual space; the position of the object in the actual space is obtained from the position sensor.
The beneficial effects of the application are that the virtual visual space corresponding to real assets can be obtained more conveniently, so that the user's assets are presented in the visual space in a visual three-dimensional state, which is convenient for the user to manage and use.
Drawings
FIG. 1 is a method flow diagram of a digital map visualization management method of the present application;
FIG. 2 is a detailed view of the construction of continuous image information in the digital map visual management method of the present application;
FIG. 3 is a system block diagram of the digital map visualization management system of the present application;
FIG. 4 is a system configuration diagram of the digital map visual management system of the present application.
Detailed Description
The present application will be described in further detail with reference to the accompanying drawings. It is to be understood that the following detailed description is intended only to further illustrate the application and is not to be construed as limiting its scope; various insubstantial modifications and adaptations made by those skilled in the art in light of the foregoing disclosure still fall within the scope of protection of the application.
Example 1
As shown in fig. 1 and fig. 2, a digital mapping visual management system comprises the following method steps.
step S102, obtaining continuous object image information entering a visual space;
wherein the continuous article image information is continuous image information on a continuous path; the continuous image information comprises initial image information and ending image information; the termination image information comprises coordinate information of the termination image information in a visual space;
step S104, obtaining article type data based on the continuous article image information; the article type data includes identifiable article type data and unrecognizable article type data;
step S106, an article model is obtained based on the article type data, and the article model is presented in a preset mode in a visual space.
In this embodiment, a visual virtual space corresponding to the field space is generally established first; this visual virtual space facilitates three-dimensional visual management of one's own assets. In general, a three-dimensional model of an object entering a given field space is quickly obtained by collecting data about the object and is displayed in the visual space. In practice, however, not all spaces are equipped with omnidirectional acquisition sensors capable of capturing detailed object data for modeling. Many temporary storage spaces, such as warehouses, transfer warehouses, temporary depots, and sales warehouses, currently lack such complete acquisition sensors; likewise, to keep goods moving in and out efficiently, covering cloths or protective facilities are not removed from the goods. Three-dimensional display processing is therefore performed for both recognizable and unrecognizable article types. Recognizable article types include those with existing models stored in the system, which can be generated quickly after feature recognition; a recognizable article type can also be an article type without any occlusion, i.e., one that can be modeled quickly and accurately for display in the three-dimensional space. An unrecognizable article type may be an article type not stored in the system, and also includes article types with occluded areas.
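Where the distinction between recognizable and unrecognizable article types rests on feature recognition against stored models, the check can be sketched as follows. This is a minimal illustration only, assuming an OpenCV ORB matcher, a hypothetical MODEL_LIBRARY of precomputed descriptors, and arbitrary thresholds; the patent does not specify the matcher or thresholds.

```python
import cv2

# Hypothetical library of stored models: item type name -> precomputed ORB
# descriptors (uint8 arrays). Structure and thresholds are illustrative
# assumptions, not taken from the patent.
MODEL_LIBRARY = {}

def classify_item_type(item_image, min_good_matches=30, max_distance=40):
    """Return (type_name, True) for an identifiable article type, or
    (None, False) for an unrecognizable one."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, desc = orb.detectAndCompute(item_image, None)
    if desc is None:
        return None, False
    best_name, best_count = None, 0
    for name, model_desc in MODEL_LIBRARY.items():
        matches = matcher.match(desc, model_desc)
        good = [m for m in matches if m.distance < max_distance]
        if len(good) > best_count:
            best_name, best_count = name, len(good)
    if best_count >= min_good_matches:
        return best_name, True    # identifiable: reuse the stored model
    return None, False            # unrecognizable: go to the contour pipeline
```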
Further, when the article type data is unrecognizable article type data, identifiable article contour data and covered article contour data in the article type data are obtained. Virtual article contour data is derived from the correlation between the covered article contour data and the identifiable article contour data, article model data is obtained from the virtual article contour data and the identifiable article contour data, the article model is generated from the article model data, and the article model is presented in a preset mode in the visual space.
It should be noted that in this embodiment the above scheme may be adopted when an article has both a recognizable area and an occluded, covered area. If the recognizable area is zero, the scheme can still be triggered to generate the object model. When both a recognizable area and a covered area are present, however, the generated virtual article contour data retains its correlation with the identifiable article contour data extracted from the recognizable area, which improves the authenticity and accuracy of the article model.
Specifically, the continuous article image information is article dynamic moving image data acquired from a fixed position and a fixed side. The article dynamic moving image data comprises the initial image information and the termination image information: the initial image information is the first image data in which the article enters the acquisition end's field of view, and the termination image information is the last image data in which the article leaves it.
It should be noted that in this embodiment the acquisition end collects image data from a fixed position and a fixed side; that is, a corresponding object model can be obtained even with a single group of acquisition ends (cameras). In the preliminary judgment of the article image information, all image data containing the article should therefore be considered, including the first image data and the last image data. The first image data is obtained as follows: continuous images of the article are acquired; the image in which the article occupies the largest area is identified from these continuous images; image features of the article, including its moving speed and its position data, are identified, the position data being referenced to the field-of-view area of the acquisition end; and the first image data, in which the article initially enters the field-of-view area, is then derived from the article's moving speed and the number of image frames acquired by the acquisition end in the corresponding time.
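The derivation of the first image data from the article's moving speed and the frame count can be illustrated with a small back-calculation. All parameter names here are assumptions for illustration; the patent only states that the entry frame follows from the moving speed and the number of frames acquired in the corresponding time.

```python
def estimate_entry_frame(ref_frame_idx, ref_position_px, fov_entry_px,
                         speed_px_per_frame):
    """Back-calculate the frame index at which the article first entered the
    acquisition end's field of view.

    ref_frame_idx / ref_position_px describe the reference frame (the one in
    which the article occupies the largest area); fov_entry_px is the edge of
    the field-of-view area where articles enter."""
    distance_travelled = abs(ref_position_px - fov_entry_px)
    frames_since_entry = int(round(distance_travelled / speed_px_per_frame))
    return max(ref_frame_idx - frames_since_entry, 0)

# e.g. article seen at x=900 px in frame 120, entered at x=0, moving 30 px/frame:
# estimate_entry_frame(120, 900, 0, 30) -> frame 90 holds the first image data
```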
Based on the obtained continuous image data, feature extraction and matching are performed on each frame to determine the change in the object's position and posture between frames. This can be achieved with feature point detection and matching algorithms such as SIFT, SURF, or ORB. At the same time, feature information such as the object's contour and edges is extracted as far as possible, and a rough three-dimensional model is initially established from this information.
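A minimal sketch of the frame-to-frame matching step, using OpenCV's ORB detector (SIFT or SURF could be substituted, as noted above); the matched point pairs would then feed a pose or homography estimate. Function and parameter choices are illustrative assumptions.

```python
import cv2

def match_consecutive_frames(frame_a, frame_b, nfeatures=1000):
    """Detect and match ORB features between two consecutive frames; the
    matched point pairs can feed pose/homography estimation for the rough
    three-dimensional model."""
    orb = cv2.ORB_create(nfeatures=nfeatures)
    kp_a, desc_a = orb.detectAndCompute(frame_a, None)
    kp_b, desc_b = orb.detectAndCompute(frame_b, None)
    if desc_a is None or desc_b is None:
        return [], []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)
    src = [kp_a[m.queryIdx].pt for m in matches]
    dst = [kp_b[m.trainIdx].pt for m in matches]
    return src, dst
```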
Specifically, contour point data of the object is obtained from the article dynamic moving image data, and an identifiable article contour data set and a covered article contour data set are obtained by classifying the contour point data. An input feature set corresponding to the covered article contour data set is built from the two sets, an offset set corresponding to the covered article contour data set is obtained from the recurrent neural network model fed with the input feature set, and the covered article contour data is corrected with the offset set to obtain the article model data.
Specifically, the contour point data in the rough three-dimensional model is classified into correct contour point data and incorrect contour point data; the correct contour point data forms the identifiable article contour data set, and the incorrect contour point data forms the covered article contour data set. It should be added that protrusion points on the contour should be treated as correct contour point data.
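One plausible reading of this classification, treating protrusion points as correct contour data, is to keep convex-hull vertices as the identifiable set. The hull criterion is an assumption for illustration; the patent does not name a specific test.

```python
import numpy as np
from scipy.spatial import ConvexHull

def split_contour_points(points):
    """Split rough-model contour points into identifiable and covered sets,
    approximating 'protrusion points are correct' by convex-hull membership:
    hull vertices form the identifiable article contour data set, the rest
    form the covered article contour data set."""
    points = np.asarray(points, dtype=float)   # shape (n, 3)
    hull = ConvexHull(points)
    is_identifiable = np.zeros(len(points), dtype=bool)
    is_identifiable[hull.vertices] = True
    return points[is_identifiable], points[~is_identifiable]
```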
Further, the input feature set includes an input feature for each point in the covered article contour data set. Each input feature is a matrix whose first row carries the position code of the input feature and whose remaining rows carry the position data of every point in the identifiable article contour data set and of every point in the covered article contour data set.
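The matrix layout of an input feature can be sketched directly from this description. The concrete position encoding (here, the point index broadcast across the first row) is an assumption; the patent only fixes the row layout.

```python
import numpy as np

def build_input_feature(t, identifiable_pts, covered_pts):
    """Assemble the matrix-form input feature for the t-th covered contour
    point: first row = position code, remaining rows = position data of every
    identifiable point followed by every covered point."""
    identifiable_pts = np.asarray(identifiable_pts, dtype=float)  # (m, 3)
    covered_pts = np.asarray(covered_pts, dtype=float)            # (n, 3)
    position_code = np.full((1, identifiable_pts.shape[1]), float(t))
    return np.vstack([position_code, identifiable_pts, covered_pts])
```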
The recurrent neural network model includes an LSTM unit.
The operation of the LSTM unit at the t-th time step comprises the following. The calculation formula of the forget gate of the t-th time step LSTM unit is:

$$f_t = \sigma\left(W_f x_t + U_f h_{t-1} + b_f\right)$$

where $x_t$ is the input feature of the $t$-th covered article contour data point and $h_{t-1}$ is the output of the LSTM unit at time step $t-1$; $W_f$ is the weight matrix applied to $x_t$, $U_f$ is the weight matrix applied to $h_{t-1}$, and $b_f$ is the first bias term. The activation function $\sigma$ (sigmoid) confines the result of $f_t$ to $(0, 1)$.

The calculation formula of the input gate of the t-th time step LSTM unit is:

$$i_t = \sigma\left(W_i x_t + U_i h_{t-1} + b_i\right)$$

where $W_i$ and $U_i$ are the corresponding weight matrices and $b_i$ is the second bias term; $\sigma$ confines the result of $i_t$ to $(0, 1)$.

The intermediate state of the t-th time step LSTM unit can be expressed as:

$$\tilde{c}_t = \tanh\left(W_c x_t + U_c h_{t-1} + b_c\right)$$

where $W_c$ and $U_c$ are the corresponding weight matrices and $b_c$ is the third bias term; the activation function $\tanh$ confines the result of $\tilde{c}_t$ to $(-1, 1)$.

The output state of the t-th time step LSTM unit is expressed by:

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$$

where $c_{t-1}$ is the output state of the LSTM at time step $t-1$, and $f_t$, $i_t$, $\tilde{c}_t$ are the calculation results of the forget gate, the input gate, and the intermediate state of the LSTM unit at the $t$-th time step; $f_t \odot c_{t-1}$ multiplies the forget gate point-by-point with the output state of time step $t-1$, and $i_t \odot \tilde{c}_t$ multiplies the input gate point-by-point with the intermediate state.

The output gate of the t-th time step LSTM unit is expressed as:

$$o_t = \sigma\left(W_o x_t + U_o h_{t-1} + b_o\right)$$

where $W_o$ and $U_o$ are the corresponding weight matrices and $b_o$ is the fourth bias term; $\sigma$ confines the result of $o_t$ to $(0, 1)$. The output of the t-th time step LSTM unit can then be expressed as:

$$h_t = o_t \odot \tanh(c_t)$$

that is, the output gate $o_t$ is multiplied point-by-point with $\tanh(c_t)$ to obtain the output $h_t$.

Definition: $t \in \{1, 2, \dots, n\}$, where $n$ is the number of covered article contour data points; at $t = 0$, $h_0 = 0$ and $c_0 = 0$.
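For reference, a plain NumPy sketch of one LSTM time step implementing exactly the gate equations above; parameter packing and shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM time step following the gate equations above; p is a dict of
    weight matrices W_*, U_* and bias vectors b_*, and x_t is the flattened
    input feature of the t-th covered article contour data point."""
    f_t = sigmoid(p["W_f"] @ x_t + p["U_f"] @ h_prev + p["b_f"])      # forget gate
    i_t = sigmoid(p["W_i"] @ x_t + p["U_i"] @ h_prev + p["b_i"])      # input gate
    c_tilde = np.tanh(p["W_c"] @ x_t + p["U_c"] @ h_prev + p["b_c"])  # intermediate state
    c_t = f_t * c_prev + i_t * c_tilde                                 # output state
    o_t = sigmoid(p["W_o"] @ x_t + p["U_o"] @ h_prev + p["b_o"])      # output gate
    h_t = o_t * np.tanh(c_t)                                           # unit output
    return h_t, c_t

# Iterating with h_0 = 0 and c_0 = 0 over t = 1..n reproduces the recurrence
# defined above for the n covered article contour data points.
```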
The recurrent neural network model is connected to three classifiers, corresponding to the offsets of the three coordinate values of each covered article contour data point.
Based on the offsets, the virtual article contour data is adjusted; the final object model data is obtained from the adjusted article contour data, and the object model is presented in an unmasked state in the visual space at the position data contained in the termination image information.
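The mapping from the LSTM output to a per-point offset, and the correction itself, can be sketched as follows, with the three classifiers approximated as three linear heads, one per coordinate axis. The head structure is an assumption, since the patent does not detail the classifier internals.

```python
import numpy as np

def predict_offset(h_t, heads):
    """Map the LSTM output h_t for one covered contour point to its offset;
    heads is a list of three (weight_vector, bias) pairs, one per coordinate
    axis, standing in for the three classifiers of the description."""
    return np.array([w @ h_t + b for (w, b) in heads])  # (dx, dy, dz)

def correct_covered_points(covered_pts, offsets):
    """Apply the predicted offsets to the covered article contour data to get
    the corrected points that feed the final object model data."""
    return np.asarray(covered_pts, dtype=float) + np.asarray(offsets, dtype=float)
```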
Example 2
As shown in fig. 3 and fig. 4, a digital mapping visual management system 2 comprises a memory 21, a processor 22, an acquisition end, and a visual end. Image data acquired by the acquisition end is input into the memory 21 and the processor 22, a corresponding visual space is provided in the visual end, and the memory stores a digital mapping visual management method program which, when executed by the processor, implements the following steps:
obtaining continuous article image information entering a visualization space; the continuous article image information is continuous image information on a continuous path; the continuous image information comprises initial image information and ending image information; the termination image information comprises coordinate information of the termination image information in a visual space; obtaining item type data based on the successive item image information; the article type data includes identifiable article type data and unrecognizable article type data; and obtaining an article model based on the article type data, and presenting the article model in a preset mode in a visual end.
The acquisition end further comprises a position sensor, wherein the position sensor is arranged in the actual space, and the position of the object in the actual space is obtained based on the position sensor.
In this embodiment, unlike embodiment 1, which obtains the final state of the article from the image, the article may in actual use be modeled first and its final position then obtained by other means or sensors.
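The division of labor in this embodiment, image-driven modeling plus sensor-driven placement, can be sketched as a thin wrapper. Every collaborator interface here (camera, position_sensor, visual_space, build_object_model) is a hypothetical stand-in for the memory/processor/acquisition-end/visual-end components; none of these names come from the patent.

```python
class DigitalMappingSystem:
    """Minimal sketch of the embodiment-2 flow: camera frames drive modeling,
    while the position sensor, not the image stream, supplies final placement."""

    def __init__(self, camera, position_sensor, visual_space, build_object_model):
        self.camera = camera                          # acquisition end
        self.position_sensor = position_sensor        # sensor in the actual space
        self.visual_space = visual_space              # visual end
        self.build_object_model = build_object_model  # embodiment-1 pipeline

    def track_article(self, article_id):
        frames = self.camera.capture_continuous()           # continuous image info
        model = self.build_object_model(frames)             # article model
        position = self.position_sensor.locate(article_id)  # final real position
        self.visual_space.place(model, position)            # present in visual end
```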
When the digital mapping visual management system is used, the schemes in the above embodiments make it more convenient to obtain the virtual visual space corresponding to real assets, so that the user's assets are displayed in the visual space in a visual three-dimensional state, which is convenient for the user to manage and use.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical functional division, and other divisions are possible in practice, such as combining multiple units or components, integrating them into another system, or omitting or not performing some features. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Alternatively, the above-described integrated units of the present application may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in essence or a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.

Claims (7)

1. A digital mapping visual management system, characterized in that the system comprises the method steps of obtaining continuous article image information entering a visual space; the continuous article image information is continuous image information on a continuous path; the continuous image information comprises initial image information and termination image information; the termination image information comprises coordinate information of the termination image information in the visual space; obtaining article type data based on the continuous article image information; the article type data includes identifiable article type data and unrecognizable article type data; obtaining an article model based on the article type data, and presenting the article model in a preset mode in the visual space; when the article type data is unrecognizable article type data, identifiable article contour data and covered article contour data in the article type data are obtained, virtual article contour data is obtained based on the correlation of the covered article contour data and the identifiable article contour data, article model data is obtained based on the virtual article contour data and the identifiable article contour data, the article model is generated based on the article model data, and the article model is displayed in a preset mode in the visual space; the continuous article image information is article dynamic moving image data acquired at a fixed position and at a fixed side; the article dynamic moving image data comprises the initial image information and the termination image information, the initial image information being the first image data entering an acquisition end in the article dynamic moving image data and the termination image information being the last image data leaving the acquisition end in the article dynamic moving image data; and contour point data of the article is obtained based on the article dynamic moving image data, an identifiable article contour data set and a covered article contour data set are obtained by classification based on the article contour point data, an input feature set corresponding to the covered article contour data set is obtained based on the identifiable article contour data set and the covered article contour data set, an offset set corresponding to the covered article contour data set is obtained in a recurrent neural network model based on the input feature set, and the covered article contour data is corrected based on the offset set to obtain the article model data.
2. A digital map visualization management system as recited in claim 1, wherein: the input feature set includes an input feature corresponding to each point in the covered article contour data set, the input feature being in the form of a matrix, a first row of which corresponds to the position code of the input feature, and the other rows of which correspond to the position data of each point in the identifiable article contour data set and to the position data of each point in the covered article contour data set.
3. A digital map visualization management system as recited in claim 2, wherein: the recurrent neural network model includes an LSTM unit.
4. A digital map visualization management system as recited in claim 3, wherein: the operation of the LSTM unit of the t-th time step comprises the following steps:

the calculation formula of the forget gate of the t-th time step LSTM unit is:

$$f_t = \sigma\left(W_f x_t + U_f h_{t-1} + b_f\right)$$

wherein $x_t$ represents the input feature of the $t$-th covered article contour data point and $h_{t-1}$ represents the output of the LSTM unit at the $(t-1)$-th time step; $W_f$ represents the weight matrix applied to $x_t$, $U_f$ represents the weight matrix applied to $h_{t-1}$, and $b_f$ represents a first bias term; the activation function $\sigma$ (sigmoid function) confines the calculation result of $f_t$ to $(0, 1)$;

the calculation formula of the input gate of the t-th time step LSTM unit is:

$$i_t = \sigma\left(W_i x_t + U_i h_{t-1} + b_i\right)$$

wherein $W_i$ and $U_i$ represent the corresponding weight matrices and $b_i$ represents a second bias term; the activation function $\sigma$ confines the calculation result of $i_t$ to $(0, 1)$;

the intermediate state $\tilde{c}_t$ of the t-th time step LSTM unit can be expressed as:

$$\tilde{c}_t = \tanh\left(W_c x_t + U_c h_{t-1} + b_c\right)$$

wherein $W_c$ and $U_c$ represent the corresponding weight matrices and $b_c$ represents a third bias term; the activation function $\tanh$ confines the calculation result of $\tilde{c}_t$ to $(-1, 1)$;

the output state $c_t$ of the t-th time step LSTM unit is expressed by:

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$$

wherein $c_{t-1}$ is the output state of the $(t-1)$-th time step LSTM, and $f_t$, $i_t$, $\tilde{c}_t$ are the calculation results of the forget gate, the input gate, and the intermediate state of the LSTM unit at the $t$-th time step; $f_t \odot c_{t-1}$ denotes point-by-point multiplication of the forget gate with the output state of the $(t-1)$-th time step, and $i_t \odot \tilde{c}_t$ denotes point-by-point multiplication of the input gate with the intermediate state;

the output gate of the t-th time step LSTM unit is expressed as:

$$o_t = \sigma\left(W_o x_t + U_o h_{t-1} + b_o\right)$$

wherein $W_o$ and $U_o$ represent the corresponding weight matrices and $b_o$ represents a fourth bias term; the activation function $\sigma$ confines the calculation result of $o_t$ to $(0, 1)$;

the output $h_t$ of the t-th time step LSTM unit can be expressed as:

$$h_t = o_t \odot \tanh(c_t)$$

the output gate $o_t$ being multiplied point-by-point with $\tanh(c_t)$ to obtain the output $h_t$;

definition: $t \in \{1, 2, \dots, n\}$, where $n$ is the number of covered article contour data points; at $t = 0$, $h_0 = 0$ and $c_0 = 0$.
5. The digital map visualization management system of claim 4, wherein: the recurrent neural network model is connected to three classifiers, corresponding to the offsets of the three coordinate values of each covered article contour data point.
6. The digital mapping visual management system is characterized by comprising a memory, a processor, an acquisition end and a visual end, wherein image data acquired by the acquisition end are input into the memory and the processor, a corresponding visual space is arranged in the visual end, the memory comprises the digital mapping visual management method program as claimed in claim 1, and the digital mapping visual management method program realizes the following steps when being executed by the processor:
obtaining continuous article image information entering a visualization space; the continuous article image information is continuous image information on a continuous path; the continuous image information comprises initial image information and ending image information; the termination image information comprises coordinate information of the termination image information in a visual space; obtaining item type data based on the successive item image information; the article type data includes identifiable article type data and unrecognizable article type data; and obtaining an article model based on the article type data, and presenting the article model in a preset mode in a visual end.
7. The digital map visualization management system of claim 6, wherein: the acquisition end further comprises a position sensor, wherein the position sensor is arranged in the actual space, and the position of the object in the actual space is obtained based on the position sensor.
CN202310875659.0A 2023-07-18 2023-07-18 Visual management system of digital mapping Active CN116612224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310875659.0A CN116612224B (en) 2023-07-18 2023-07-18 Visual management system of digital mapping

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310875659.0A CN116612224B (en) 2023-07-18 2023-07-18 Visual management system of digital mapping

Publications (2)

Publication Number Publication Date
CN116612224A (en) 2023-08-18
CN116612224B (en) 2023-10-13

Family

ID=87682146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310875659.0A Active CN116612224B (en) 2023-07-18 2023-07-18 Visual management system of digital mapping

Country Status (1)

Country Link
CN (1) CN116612224B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9996890B1 (en) * 2017-07-14 2018-06-12 Synapse Technology Corporation Detection of items
CN112005244A (en) * 2019-11-15 2020-11-27 深圳市微蓝智能科技有限公司 Article management method, terminal device, article management apparatus, and storage medium
WO2021194413A1 (en) * 2020-03-27 2021-09-30 Ascent Solutions Pte Ltd Asset monitoring system
CN113505046A (en) * 2021-05-31 2021-10-15 云聚数据科技(上海)有限公司 Three-dimensional visual data center monitoring management system and method
CN115221794A (en) * 2022-07-30 2022-10-21 深圳市叁玖模型制作有限公司 Asset processing method for converting assets into digital assets
CN116310918A (en) * 2023-02-16 2023-06-23 东易日盛家居装饰集团股份有限公司 Indoor key object identification and positioning method, device and equipment based on mixed reality
CN116385779A (en) * 2023-03-24 2023-07-04 西安理工大学 Cloud edge end-based architecture and method for identifying articles in image

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9996890B1 (en) * 2017-07-14 2018-06-12 Synapse Technology Corporation Detection of items
CN112005244A (en) * 2019-11-15 2020-11-27 深圳市微蓝智能科技有限公司 Article management method, terminal device, article management apparatus, and storage medium
WO2021092883A1 (en) * 2019-11-15 2021-05-20 深圳市微蓝智能科技有限公司 Article management method, terminal apparatus, article management device, and storage medium
WO2021194413A1 (en) * 2020-03-27 2021-09-30 Ascent Solutions Pte Ltd Asset monitoring system
CN113505046A (en) * 2021-05-31 2021-10-15 云聚数据科技(上海)有限公司 Three-dimensional visual data center monitoring management system and method
CN115221794A (en) * 2022-07-30 2022-10-21 深圳市叁玖模型制作有限公司 Asset processing method for converting assets into digital assets
CN116310918A (en) * 2023-02-16 2023-06-23 东易日盛家居装饰集团股份有限公司 Indoor key object identification and positioning method, device and equipment based on mixed reality
CN116385779A (en) * 2023-03-24 2023-07-04 西安理工大学 Cloud edge end-based architecture and method for identifying articles in image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Three-dimensional visualization platform for power transmission lines based on asset-class models and spatial information models; Yang Chengshun; Yang Zhongya; Huang Xiaoning; Electrical Measurement & Instrumentation (No. 23); full text *

Also Published As

Publication number Publication date
CN116612224A (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN108090433B (en) Face recognition method and device, storage medium and processor
CN111898547B (en) Training method, device, equipment and storage medium of face recognition model
CN112949565B (en) Single-sample partially-shielded face recognition method and system based on attention mechanism
CN109376631B (en) Loop detection method and device based on neural network
CN109284733B (en) Shopping guide negative behavior monitoring method based on yolo and multitask convolutional neural network
US20210264144A1 (en) Human pose analysis system and method
CN104240264B (en) The height detection method and device of a kind of moving object
WO2021196389A1 (en) Facial action unit recognition method and apparatus, electronic device, and storage medium
CN106599836A (en) Multi-face tracking method and tracking system
CN111310662B (en) Flame detection and identification method and system based on integrated deep network
Pound et al. A patch-based approach to 3D plant shoot phenotyping
WO2021238664A1 (en) Method and device for capturing information, and method, device, and system for measuring level of attention
CN110879982A (en) Crowd counting system and method
CN111738344A (en) Rapid target detection method based on multi-scale fusion
CN110555908A (en) three-dimensional reconstruction method based on indoor moving target background restoration
CN113139489B (en) Crowd counting method and system based on background extraction and multi-scale fusion network
WO2023151237A1 (en) Face pose estimation method and apparatus, electronic device, and storage medium
CN107704797A (en) Real-time detection method and system and equipment based on pedestrian in security protection video and vehicle
CN109190639A (en) A kind of vehicle color identification method, apparatus and system
CN113850183A (en) Method for judging behaviors in video based on artificial intelligence technology
Palaniswamy et al. Automatic identification of landmarks in digital images
CN114359172A (en) Cigarette carton multi-face detection and identification method and system under stock or display scene
CN113221812A (en) Training method of face key point detection model and face key point detection method
CN116612224B (en) Visual management system of digital mapping
CN113743382B (en) Shelf display detection method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant