CN111198994A - Object identification processing method, device, equipment and medium based on video network - Google Patents


Info

Publication number
CN111198994A
Authority
CN
China
Prior art keywords
data set
data
object identifier
video
attribute name
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911383579.3A
Other languages
Chinese (zh)
Other versions
CN111198994B (en)
Inventor
杜迎锋
陈婷
陈辉
王艳辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201911383579.3A priority Critical patent/CN111198994B/en
Publication of CN111198994A publication Critical patent/CN111198994A/en
Application granted granted Critical
Publication of CN111198994B publication Critical patent/CN111198994B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/955: Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/951: Indexing; Web crawling techniques

Abstract

The embodiment of the invention provides an object identification processing method and device based on a video network. A first data set comprising a first object identifier and corresponding target data and a second data set comprising an index value are obtained; a third data set comprising a second object identifier and a corresponding object attribute name is acquired; a fourth data set comprising a third object identifier and a corresponding object attribute name is generated; a fifth data set comprising the object attribute name corresponding to the third object identifier and the target data corresponding to the first object identifier that is the same as the third object identifier is generated; and the fifth data set is converted into a data object, so that the target data corresponding to the object identifier is converted into an object attribute of the data object. Because object attribute names are easier to process than object identifiers, the data processing speed is improved, the data object can be operated on as a whole, front-end personnel can identify the data easily, and the development efficiency of developers is improved.

Description

Object identification processing method, device, equipment and medium based on video network
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and an apparatus for processing an object identifier based on a video network, an electronic device, and a computer-readable storage medium.
Background
The video network is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, pushing many Internet applications toward high-definition video and bringing high definition face to face.
When a private network management protocol based on SNMP (Simple Network Management Protocol) is used, the data acquired from a network device of the video network consists of a large number of correspondences between OIDs (Object Identifiers) and values. These hard-to-understand OIDs have to be processed when services are implemented during application development, which greatly affects the data processing speed; front-end personnel also have difficulty recognizing OIDs, so the development efficiency of front-end and back-end developers is low.
Disclosure of Invention
The embodiment of the invention discloses an object identification processing method based on a video network, an object identification processing device based on the video network, electronic equipment and a computer readable storage medium.
In a first aspect, an embodiment of the present invention shows an object identifier processing method based on a video network, where the video network includes a video network device and a video network terminal, and the method is applied to the video network terminal, and the method includes:
obtaining a first data set comprising at least one first object identification and corresponding target data, and a second data set comprising at least one index value of the target data, from the video networking network device; wherein the first object identifier comprises an index value;
acquiring a third data set which comprises at least one second object identifier and a corresponding object attribute name and is locally stored by the video network terminal;
generating a fourth data set comprising at least one third object identification and a corresponding object attribute name according to the third data set and the second data set; wherein the at least one third object identifier is respectively composed of each second object identifier in the third data set and each index value in the second data set, and the object attribute name corresponding to the third object identifier is the object attribute name corresponding to the second object identifier composing the third object identifier;
generating a fifth data set comprising an object attribute name corresponding to at least one third object identifier and corresponding target data corresponding to a first object identifier which is the same as the third object identifier according to the first data set and the fourth data set;
converting the fifth data set into a data object.
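As an illustration of the above method (not part of the claims), the following minimal Python sketch walks through the five data sets end to end; the dictionary representation, variable names and the build_data_object helper are assumptions introduced only for this example.

```python
from types import SimpleNamespace

# Illustrative sketch of the five-data-set pipeline described above.
# The dictionary layout and all names are assumptions, not part of the claims.
def build_data_object(first_set, second_set, third_set):
    """first_set:  {first object identifier (OID with index): target data}  - from the device
       second_set: {index OID: index value}                                 - from the device
       third_set:  {second object identifier: object attribute name}        - stored locally"""
    # Fourth data set: each third OID is a second OID joined with an index value.
    fourth_set = {
        f"{oid}.{index}": attr_name
        for oid, attr_name in third_set.items()
        for index in second_set.values()
    }
    # Fifth data set: attribute name -> target data, matched on identical OIDs.
    fifth_set = {
        attr_name: first_set[third_oid]
        for third_oid, attr_name in fourth_set.items()
        if third_oid in first_set
    }
    # Convert the fifth data set into a data object.
    return SimpleNamespace(**fifth_set)
```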
In an alternative implementation, the obtaining a first data set including at least one first object identifier and corresponding target data from the video networking network device includes:
obtaining at least one first object identification and corresponding target data from the video networking network device;
and storing the acquired at least one first object identifier and corresponding target data into the first data set.
In an optional implementation, the obtaining a second data set including at least one index value of target data from the video networking network device includes:
and acquiring a second data set of a data table storing the target data in a management information base on the video network equipment.
In an optional implementation manner, the generating, according to the first data set and the fourth data set, a fifth data set including an object attribute name corresponding to at least one third object identifier and corresponding target data corresponding to a first object identifier that is the same as the third object identifier includes:
and taking at least one object attribute name in the fourth data set as a key of a key value pair in the fifth data set, and taking at least one target data in the first data set as a value of the key value pair in the fifth data set, wherein a third object identifier corresponding to the object attribute name in the same key value pair is the same as a first object identifier corresponding to the target data.
In an alternative implementation, the first object identifier, the second object identifier, and the third object identifier are OIDs.
In a second aspect, an embodiment of the present invention shows an object identifier processing apparatus based on a video network, where the video network includes a video network device and a video network terminal, and the apparatus is applied to the video network terminal, and the apparatus includes:
a device set acquisition module for acquiring a first data set including at least one first object identifier and corresponding target data, and a second data set including at least one index value of the target data from the video networking network device; wherein the first object identifier comprises an index value;
the local set acquisition module is used for acquiring a third data set which is locally stored by the video network terminal and comprises at least one second object identifier and a corresponding object attribute name;
a fourth set generating module, configured to generate a fourth data set including at least one third object identifier and a corresponding object attribute name according to the third data set and the second data set; wherein the at least one third object identifier is respectively composed of each second object identifier in the third data set and each index value in the second data set, and the object attribute name corresponding to the third object identifier is the object attribute name corresponding to the second object identifier composing the third object identifier;
a fifth set generating module, configured to generate a fifth data set including an object attribute name corresponding to at least one third object identifier and corresponding target data corresponding to a first object identifier that is the same as the third object identifier, according to the first data set and the fourth data set;
and the object conversion module is used for converting the fifth data set into a data object.
In an optional implementation manner, the device set obtaining module includes:
the data acquisition submodule is used for acquiring at least one first object identifier and corresponding target data from the video network equipment;
and the data storage submodule is used for storing the acquired at least one first object identifier and the corresponding target data into the first data set.
In an optional implementation manner, the device set obtaining module includes:
and the set acquisition submodule is used for acquiring a second data set of a data table for storing the target data in a management information base on the video networking network equipment.
In an optional implementation manner, the fifth set generating module includes:
and the key value pair processing submodule is used for taking at least one object attribute name in the fourth data set as a key of a key value pair in the fifth data set and taking at least one target data in the first data set as a value of the key value pair in the fifth data set, wherein a third object identifier corresponding to the object attribute name in the same key value pair is the same as a first object identifier corresponding to the target data.
In an alternative implementation, the first object identifier, the second object identifier, and the third object identifier are OIDs.
In a third aspect, the embodiment of the present invention shows an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the processor implements the object identification processing method based on the video network.
In a fourth aspect, the embodiment of the present invention shows a computer-readable storage medium, on which a computer program is stored, the computer program enabling a processor to execute the object identification processing method based on the video network.
The embodiment of the invention has the following advantages:
in the application, a first data set comprising at least one first object identifier and corresponding target data and a second data set comprising at least one index value of the target data are obtained from the video networking network device, wherein the first object identifier comprises an index value; a third data set locally stored by the video network terminal and comprising at least one second object identifier and a corresponding object attribute name is acquired; a fourth data set comprising at least one third object identifier and a corresponding object attribute name is generated according to the third data set and the second data set, wherein each third object identifier is composed of a second object identifier in the third data set and an index value in the second data set, and the object attribute name corresponding to a third object identifier is the object attribute name corresponding to the second object identifier composing it; a fifth data set comprising an object attribute name corresponding to at least one third object identifier and the target data corresponding to the first object identifier that is the same as that third object identifier is generated according to the first data set and the fourth data set; and the fifth data set is converted into a data object. The target data corresponding to an object identifier is thereby converted into an object attribute of the data object, which avoids having to process hard-to-understand object identifiers when implementing services during application development: the object identifiers are replaced by object attribute names, which are easier to process, so the data processing speed is improved, the data object can be operated on as a whole, front-end personnel can identify the data easily, and the development efficiency of front-end and back-end developers is improved.
Drawings
Fig. 1 is a block diagram of a video network according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps of an object identification processing method based on a video network according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of a data object conversion flow.
Fig. 4 is a flowchart illustrating steps of an object identification processing method based on a video network according to an embodiment of the present invention.
Fig. 5 is a block diagram of an object identification processing apparatus based on a video network according to an embodiment of the present invention.
Fig. 6 is a networking schematic diagram of a video network of the present invention.
Fig. 7 is a schematic diagram of a hardware structure of a node server according to the present invention.
Fig. 8 is a schematic diagram of a hardware structure of an access switch of the present invention.
Fig. 9 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a block diagram of a video network according to an embodiment of the present invention is shown, where the video network includes a video network device 01 and a video network terminal 02.
Referring to fig. 2, a flowchart illustrating steps of an object identifier processing method based on a video network according to an embodiment of the present invention is shown, where the method may be applied to the video network terminal 02 shown in fig. 1, and the method may specifically include the following steps:
step S11, obtaining a first data set including at least one first object identification and corresponding target data and a second data set including at least one index value of the target data from the video network equipment; wherein the first object identifier includes an index value.
In the embodiment of the present invention, the network device in the video network includes, but is not limited to, a modem ("cat") terminal or other network devices in the video network, which is not limited in the embodiment of the present invention. The video networking network device can actively send data to the video network terminal, or send data passively in response to a request from the video network terminal.
In the embodiment of the present invention, the object identifier is used to identify an object. The data acquired from the video networking network device is the correspondence between first object identifiers and target data, and a data set comprising at least one first object identifier and the corresponding target data is recorded as the first data set. For example, when a private network management protocol based on SNMP is used, the data retrieved from a video networking network device consists of a large number of OID-value correspondences. The acquired at least one first object identifier and corresponding target data are stored into the first data set. A data set is a collection of data in memory.
In the embodiment of the invention, the target data has an index value in a data table on the video networking network device, and a data set comprising at least one such index value is recorded as the second data set. The first object identifier comprises an index value.
For example, the second data set of data indexes acquired by the video network terminal from the video networking network device includes: 1.3.6.1.4.1.54120.1.3.1.7.1.1 = 1, where 1.3.6.1.4.1.54120.1.3.1.7.1.1 is the OID corresponding to the index value and 1 is the index value. The acquired first data set includes: 1.3.6.1.4.1.54120.1.3.1.7.1.3.1 = Zhang San, where 1.3.6.1.4.1.54120.1.3.1.7.1.3.1 is the OID corresponding to the target data (i.e., the first object identifier), Zhang San is the value (i.e., the target data), and the last digit 1 of 1.3.6.1.4.1.54120.1.3.1.7.1.3.1 is the index value.
In an alternative implementation, the first object identifier, the second object identifier, and the third object identifier are OIDs. An OID is a globally unique value associated with an object that is used to unambiguously identify the object, ensuring that the object is properly located and managed in the communication processing. In general, OID is the identity card of the object in network communication.
Step S12, obtaining a third data set including at least one second object identifier and a corresponding object attribute name stored locally by the video network terminal.
In the embodiment of the present invention, the object attribute name refers to an attribute name of the data object. The correspondence between second object identifiers and object attribute names is stored locally on the video network terminal, and a data set comprising at least one second object identifier and the corresponding object attribute name is recorded as the third data set. For example, the locally stored third data set includes: 1.3.6.1.4.1.54120.1.3.1.7.1.3 = Name, where 1.3.6.1.4.1.54120.1.3.1.7.1.3 is the OID corresponding to the object attribute name (i.e., the second object identifier) and Name is the object attribute name.
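Putting the examples from step S11 and step S12 together, the three input data sets can be pictured as plain dictionaries; the following sketch is illustrative only, and the variable names are assumptions.

```python
# Illustrative representation of the example data sets above (names are assumptions).

# First data set: first object identifier (OID with index) -> target data
first_data_set = {"1.3.6.1.4.1.54120.1.3.1.7.1.3.1": "Zhang San"}

# Second data set: index OID -> index value of the target data
second_data_set = {"1.3.6.1.4.1.54120.1.3.1.7.1.1": "1"}

# Third data set (stored locally): second object identifier -> object attribute name
third_data_set = {"1.3.6.1.4.1.54120.1.3.1.7.1.3": "Name"}
```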
Step S13, according to the third data set and the second data set, generating a fourth data set including at least one third object identification and a corresponding object attribute name; wherein the at least one third object id is respectively composed of each second object id in the third data set and each index value in the second data set, and the object attribute name corresponding to the third object id is the object attribute name corresponding to the second object id composing the third object id.
In an embodiment of the present invention, a fourth data set may be generated from the third data set and the second data set. And recording a data set comprising at least one third object identifier and the corresponding object attribute name as a fourth data set. And the at least one third object identifier is respectively composed of each second object identifier in the third data set and each index value in the second data set, and the object attribute name corresponding to the third object identifier is the object attribute name corresponding to the second object identifier composing the third object identifier.
For example, the third data set includes: 1.3.6.1.4.1.54120.1.3.1.7.1.3 (second object identifier) = Name (object attribute name), and the second data set includes: 1.3.6.1.4.1.54120.1.3.1.7.1.1 = 1 (index value). Combining the second object identifier with the index value yields the third object identifier 1.3.6.1.4.1.54120.1.3.1.7.1.3.1, and since the corresponding object attribute name is Name, the fourth data set includes: 1.3.6.1.4.1.54120.1.3.1.7.1.3.1: Name.
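Continuing the illustrative sketch above, step S13 can be expressed as a single dictionary comprehension that joins each second object identifier with each index value (names assumed):

```python
# Fourth data set: each third object identifier is a second object identifier
# joined with an index value, and it inherits that second OID's attribute name.
fourth_data_set = {
    f"{second_oid}.{index_value}": attr_name
    for second_oid, attr_name in third_data_set.items()
    for index_value in second_data_set.values()
}
# fourth_data_set == {"1.3.6.1.4.1.54120.1.3.1.7.1.3.1": "Name"}
```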
Step S14, generating a fifth data set including an object attribute name corresponding to at least one third object identifier and corresponding target data corresponding to the first object identifier that is the same as the third object identifier, according to the first data set and the fourth data set.
In an embodiment of the present invention, a fifth data set may be generated from the first data set and the fourth data set. A data set comprising an object attribute name corresponding to at least one third object identifier and the target data corresponding to the first object identifier that is the same as that third object identifier is recorded as the fifth data set; that is, the fifth data set comprises the correspondence between object attribute names and target data. For example, the fourth data set includes: 1.3.6.1.4.1.54120.1.3.1.7.1.3.1 (third object identifier): Name (object attribute name), and the first data set includes: 1.3.6.1.4.1.54120.1.3.1.7.1.3.1 (first object identifier) = Zhang San (target data). Since the first object identifier is the same as the third object identifier, the object attribute name is taken from the fourth data set and the target data from the first data set, and the object attribute name together with the corresponding target data forms the fifth data set.
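Continuing the same sketch, step S14 keeps only those entries of the fourth data set whose third object identifier also appears as a first object identifier (names assumed):

```python
# Fifth data set: object attribute name -> target data, kept only where the
# third object identifier also appears as a first object identifier.
fifth_data_set = {
    attr_name: first_data_set[third_oid]
    for third_oid, attr_name in fourth_data_set.items()
    if third_oid in first_data_set
}
# fifth_data_set == {"Name": "Zhang San"}
```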
Step S15, converting the fifth data set into data objects.
In the embodiment of the present invention, the fifth data set is an in-memory set of object attribute names and corresponding target data. The fifth data set is converted into a data object, so the target data is likewise converted into attribute information of the data object.
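One possible way to perform this conversion in the sketch is a simple namespace object whose attributes carry the target data (an illustration only, not the required implementation):

```python
from types import SimpleNamespace

# Convert the fifth data set into a data object; each object attribute name
# becomes an attribute of the object and carries the corresponding target data.
data_object = SimpleNamespace(**fifth_data_set)
print(data_object.Name)  # -> Zhang San
```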
For example, as shown in the schematic diagram of the data object conversion flow in fig. 3, SNMP is an application-layer protocol of the network protocol stack. The protocol enables information from different systems to be collected in a uniform manner; although the system can be accessed in many ways, the information query method and the path-related information are fully standardized. The transparent transmission module is used, as its name implies, for transparent data transmission: during transmission the length and content of the data are exactly the same at the sender and the receiver, without any processing of the data, so transparent transmission is equivalent to a data line or a serial cable. The v2v private protocol is a private network management protocol based on SNMP; aiming at the characteristics of the high-definition video transmission process, it innovates in communication addressing, communication node switching, heterogeneous communication data fusion and other aspects, can realize bidirectional, real-time, secure, high-quality and large-scale transmission of high-definition video data, and supports multipoint concurrency, cross-level flexible networking, broad compatibility with mainstream communication protocols and the like. The method comprises the following steps: obtain the data index from the video networking network device, obtain the locally stored set of correspondences between OIDs and object attribute names on the video network terminal, compose a new index from the obtained index and the local OID set, use the new index to take values from the set obtained from the video networking network device, form a new set with the local object attribute names as keys and the newly obtained values as values, and convert this set into the required data object.
In the application, a first data set comprising at least one first object identifier and corresponding target data and a second data set comprising at least one index value of the target data are obtained from the video networking network device, wherein the first object identifier comprises an index value; a third data set locally stored by the video network terminal and comprising at least one second object identifier and a corresponding object attribute name is acquired; a fourth data set comprising at least one third object identifier and a corresponding object attribute name is generated according to the third data set and the second data set, wherein each third object identifier is composed of a second object identifier in the third data set and an index value in the second data set, and the object attribute name corresponding to a third object identifier is the object attribute name corresponding to the second object identifier composing it; a fifth data set comprising an object attribute name corresponding to at least one third object identifier and the target data corresponding to the first object identifier that is the same as that third object identifier is generated according to the first data set and the fourth data set; and the fifth data set is converted into a data object. The target data corresponding to an object identifier is thereby converted into an object attribute of the data object, which avoids having to process hard-to-understand object identifiers when implementing services during application development: the object identifiers are replaced by object attribute names, which are easier to process, so the data processing speed is improved, the data object can be operated on as a whole, front-end personnel can identify the data easily, and the development efficiency of front-end and back-end developers is improved.
Referring to fig. 4, a flowchart illustrating steps of an object identifier processing method based on a video network according to an embodiment of the present invention is shown, where the method may be applied to the video network terminal 02 shown in fig. 1, and the method may specifically include the following steps:
step S21, at least one first object identification and corresponding target data are obtained from the video network device.
Step S22, storing the acquired at least one first object identifier and corresponding target data into the first data set.
Step S23, obtaining a second data set of the data table storing the target data in the management information base on the network device of the video network.
In the embodiment of the present invention, a Management Information Base (MIB) defines the data items that a managed device must store, the operations allowed on each data item, and their meanings; that is, the MIB stores the data variables, such as control and status information of the managed device, that are accessible to the management system. The target data is stored in a data table, and the second data set of that data table is the set of index values of the target data.
Step S24, obtaining a third data set including at least one second object identifier and a corresponding object attribute name stored locally by the video network terminal.
Step S25, according to the third data set and the second data set, generating a fourth data set including at least one third object identification and a corresponding object attribute name; wherein the at least one third object id is respectively composed of each second object id in the third data set and each index value in the second data set, and the object attribute name corresponding to the third object id is the object attribute name corresponding to the second object id composing the third object id.
Step S26, using at least one object attribute name in the fourth data set as a key of a key value pair in the fifth data set, and using at least one target data in the first data set as a value of the key value pair in the fifth data set, where a third object identifier corresponding to the object attribute name in the same key value pair is the same as a first object identifier corresponding to the target data.
In the embodiment of the present invention, when the first object identifier is the same as the third object identifier, the object attribute name is taken from the fourth data set according to the third object identifier and is used as a key of a key-value pair in the fifth data set, and the target data is taken from the first data set according to the first object identifier and is used as a value of the key-value pair in the fifth data set.
Step S27, converting the fifth data set into data objects.
In the application, at least one first object identifier and corresponding target data are obtained from the video networking network device and stored into the first data set; a second data set of the data table storing the target data in the management information base on the video networking network device is obtained; a third data set locally stored by the video network terminal and comprising at least one second object identifier and a corresponding object attribute name is obtained; and a fourth data set comprising at least one third object identifier and a corresponding object attribute name is generated according to the third data set and the second data set, wherein each third object identifier is composed of a second object identifier in the third data set and an index value in the second data set, and the object attribute name corresponding to a third object identifier is the object attribute name corresponding to the second object identifier composing it. At least one object attribute name in the fourth data set is used as the key of a key-value pair in the fifth data set, and at least one item of target data in the first data set is used as the value of that key-value pair, where the third object identifier corresponding to the object attribute name in a key-value pair is the same as the first object identifier corresponding to the target data. The fifth data set is then converted into a data object, so the target data corresponding to an object identifier is converted into an object attribute of the data object. This avoids having to process hard-to-understand object identifiers when implementing services during application development: the object identifiers are replaced by object attribute names, which are easier to process, so the data processing speed is improved, the data object can be operated on as a whole, front-end personnel can identify the data easily, and the development efficiency of front-end and back-end developers is improved.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 5, a block diagram of a structure of an object identifier processing apparatus based on a video network according to an embodiment of the present invention is shown, where the apparatus is applied to a video network terminal 02 shown in fig. 1, and the apparatus may specifically include the following modules:
a device set acquiring module 31, configured to acquire, from the video network device, a first data set including at least one first object identifier and corresponding target data, and a second data set including at least one index value of the target data; wherein the first object identifier comprises an index value;
a local set obtaining module 32, configured to obtain a third data set locally stored in the video networking terminal and including at least one second object identifier and a corresponding object attribute name;
a fourth set generating module 33, configured to generate a fourth data set including at least one third object identifier and a corresponding object attribute name according to the third data set and the second data set; wherein the at least one third object identifier is respectively composed of each second object identifier in the third data set and each index value in the second data set, and the object attribute name corresponding to the third object identifier is the object attribute name corresponding to the second object identifier composing the third object identifier;
a fifth set generating module 34, configured to generate a fifth data set including an object attribute name corresponding to at least one third object identifier and corresponding target data corresponding to a first object identifier that is the same as the third object identifier, according to the first data set and the fourth data set;
an object conversion module 35, configured to convert the fifth data set into a data object.
In an optional implementation manner, the device set obtaining module includes:
the data acquisition submodule is used for acquiring at least one first object identifier and corresponding target data from the video network equipment;
and the data storage submodule is used for storing the acquired at least one first object identifier and the corresponding target data into the first data set.
In an optional implementation manner, the device set obtaining module includes:
and the set acquisition submodule is used for acquiring a second data set of a data table for storing the target data in a management information base on the video networking network equipment.
In an optional implementation manner, the fifth set generating module includes:
and the key value pair processing submodule is used for taking at least one object attribute name in the fourth data set as a key of a key value pair in the fifth data set and taking at least one target data in the first data set as a value of the key value pair in the fifth data set, wherein a third object identifier corresponding to the object attribute name in the same key value pair is the same as a first object identifier corresponding to the target data.
In an alternative implementation, the first object identifier, the second object identifier, and the third object identifier are OIDs.
In the application, a first data set comprising at least one first object identifier and corresponding target data and a second data set comprising at least one index value of the target data are obtained from the video networking network device, wherein the first object identifier comprises an index value; a third data set locally stored by the video network terminal and comprising at least one second object identifier and a corresponding object attribute name is acquired; a fourth data set comprising at least one third object identifier and a corresponding object attribute name is generated according to the third data set and the second data set, wherein each third object identifier is composed of a second object identifier in the third data set and an index value in the second data set, and the object attribute name corresponding to a third object identifier is the object attribute name corresponding to the second object identifier composing it; a fifth data set comprising an object attribute name corresponding to at least one third object identifier and the target data corresponding to the first object identifier that is the same as that third object identifier is generated according to the first data set and the fourth data set; and the fifth data set is converted into a data object. The target data corresponding to an object identifier is thereby converted into an object attribute of the data object, which avoids having to process hard-to-understand object identifiers when implementing services during application development: the object identifiers are replaced by object attribute names, which are easier to process, so the data processing speed is improved, the data object can be operated on as a whole, front-end personnel can identify the data easily, and the development efficiency of front-end and back-end developers is improved.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
An embodiment of the present invention further illustrates an electronic device, where the electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the method for processing object identifiers based on a video network as shown in fig. 2 when executing the computer program.
An embodiment of the present invention also shows a computer-readable storage medium, on which a computer program is stored, where the computer program enables a processor to execute the object identification processing method based on the video network shown in fig. 2.
The video network is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, pushing many Internet applications toward high-definition video and bringing high definition face to face.
The video network adopts real-time high-definition video switching technology and can integrate dozens of required services, such as video, voice, pictures, text, communication and data, on one network platform, for example high-definition video conferencing, video surveillance, intelligent surveillance analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-operated) channels, intelligent video broadcast control and information distribution, and realizes high-definition-quality video broadcasting through a television or a computer.
To better understand the embodiments of the present invention, the video network is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
Network technology innovation in the video network improves the traditional Ethernet so that it can face the potentially huge video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, the video networking technology adopts packet switching to meet streaming requirements. The video networking technology has the flexibility, simplicity and low cost of packet switching while providing the quality and security guarantees of circuit switching, thereby realizing seamless, whole-network connection of switched virtual circuits and data formats.
Switching Technology (Switching Technology)
The video network adopts the two advantages of Ethernet, asynchronism and packet switching, and eliminates Ethernet's defects while remaining fully compatible. It provides end-to-end seamless connection across the whole network, communicates directly with user terminals, and directly carries IP data packets. User data requires no format conversion anywhere in the network. The video network is a higher-level form of Ethernet and a real-time switching platform; it can realize whole-network, large-scale, real-time transmission of high-definition video, which the existing Internet cannot, and pushes many network video applications toward high definition and unification.
Server Technology (Server Technology)
The server technology on the video networking and unified video platform is different from the traditional server, the streaming media transmission of the video networking and unified video platform is established on the basis of connection orientation, the data processing capacity of the video networking and unified video platform is independent of flow and communication time, and a single network layer can contain signaling and data transmission. For voice and video services, the complexity of video networking and unified video platform streaming media processing is much simpler than that of data processing, and the efficiency is greatly improved by more than one hundred times compared with that of a traditional server.
Storage Technology (Storage Technology)
The super-high speed storage technology of the unified video platform adopts the most advanced real-time operating system in order to adapt to the media content with super-large capacity and super-large flow, the program information in the server instruction is mapped to the specific hard disk space, the media content is not passed through the server any more, and is directly sent to the user terminal instantly, and the general waiting time of the user is less than 0.2 second. The optimized sector distribution greatly reduces the mechanical motion of the magnetic head track seeking of the hard disk, the resource consumption only accounts for 20% of that of the IP internet of the same grade, but concurrent flow which is 3 times larger than that of the traditional hard disk array is generated, and the comprehensive efficiency is improved by more than 10 times.
Network Security Technology (Network Security Technology)
The structural design of the video network completely eliminates the network security problem troubling the internet structurally by the modes of independent service permission control each time, complete isolation of equipment and user data and the like, generally does not need antivirus programs and firewalls, avoids the attack of hackers and viruses, and provides a structural carefree security network for users.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services with transmission: whether for a single user, a private network user or a network aggregate, only one automatic connection is needed. The user terminal, set-top box or PC connects directly to the unified video platform to obtain multimedia video services in various forms. The unified video platform uses a menu-style configuration table instead of traditional complex application programming, so complex applications can be realized with very little code, enabling endless new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 6, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server is a node server of the access network part, namely the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: server, exchanger (including Ethernet protocol conversion gateway), terminal (including various set-top boxes, code board, memory, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node server, access exchanger (including Ethernet protocol conversion gateway), terminal (including various set-top boxes, coding board, memory, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 7, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
packets from the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet, and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's direction information; if the queue of the packet buffer 206 is nearly full, the packet is discarded; the switching engine module 202 polls all packet buffer queues and forwards if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
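The polling and forwarding conditions of the switching engine module 202 can be pictured with the following simplified sketch; the data structures and function name are hypothetical and only illustrate the two conditions listed above.

```python
# Simplified sketch of the switching engine's polling loop (hypothetical structures).
def poll_packet_buffers(packet_queues, send_buffers):
    """packet_queues: {port_id: list of packets}; send_buffers: {port_id: queue.Queue}."""
    for port_id, packets in packet_queues.items():  # poll all packet buffer queues
        send_buffer = send_buffers[port_id]
        if send_buffer.full():                      # condition 1: port send buffer not full
            continue
        if len(packets) == 0:                       # condition 2: queue packet counter > 0
            continue
        send_buffer.put(packets.pop(0))             # forward one packet from this queue
```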
The access switch:
as shown in fig. 8, the network interface module (downlink network interface module 301, uplink network interface module 302), switching engine module 303 and CPU module 304 are mainly included;
A packet (uplink data) coming from the downlink network interface module 301 enters the packet detection module 305. The packet detection module 305 detects whether the Destination Address (DA), Source Address (SA), packet type and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and the packet enters the switching engine module 303, otherwise the packet is discarded. A packet (downlink data) coming from the uplink network interface module 302 enters the switching engine module 303, as does a data packet coming from the CPU module 304. The switching engine module 303 looks up the address table 306 for the incoming packet, thereby obtaining the direction information of the packet. If the packet entering the switching engine module 303 goes from the downlink network interface to the uplink network interface, the packet is stored in the queue of the corresponding packet buffer 307 in association with the stream-id; if that queue is nearly full, the packet is discarded. If the packet entering the switching engine module 303 does not go from the downlink network interface to the uplink network interface, the data packet is stored in the queue of the corresponding packet buffer 307 according to its direction information; if that queue is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues and may include two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) obtaining a token generated by a code rate control module;
if the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304, and generates tokens for packet buffer queues from all downstream network interfaces to upstream network interfaces at programmable intervals to control the rate of upstream forwarding.
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
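The token-based code rate control described above can be sketched as follows; the class name, interval handling and queue identifiers are assumptions used purely for illustration.

```python
import time

# Hypothetical sketch of the code rate control: tokens are generated for every
# downstream-to-upstream queue at a programmable interval, and forwarding from
# such a queue consumes one token.
class RateControl:
    def __init__(self, interval_seconds, queue_ids):
        self.interval_seconds = interval_seconds      # configured by the CPU module
        self.tokens = {qid: 0 for qid in queue_ids}   # tokens per upstream-bound queue
        self.last_refresh = time.monotonic()

    def refresh(self):
        if time.monotonic() - self.last_refresh >= self.interval_seconds:
            for qid in self.tokens:
                self.tokens[qid] += 1                 # one new token per interval
            self.last_refresh = time.monotonic()

    def may_forward(self, qid, send_buffer_full, packet_count):
        # Conditions 1-3 for a downstream-to-upstream queue.
        if send_buffer_full or packet_count == 0 or self.tokens[qid] == 0:
            return False
        self.tokens[qid] -= 1                         # consume the token when forwarding
        return True
```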
Ethernet protocol conversion gateway
As shown in fig. 9, the system mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
A data packet coming from the downlink network interface module 401 enters the packet detection module 405. The packet detection module 405 detects whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address DA, video network source address SA, video network packet type and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), the MAC deleting module 410 removes the MAC DA, MAC SA and length or frame type (2 bytes), and the packet enters the corresponding receiving buffer; otherwise, the packet is discarded;
the downlink network interface module 401 detects the sending buffer of the port; if there is a packet, it obtains the Ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, adds the Ethernet MAC DA of the terminal, the MAC SA of the Ethernet protocol conversion gateway and the Ethernet length or frame type, and sends the packet.
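The header handling of the MAC deleting module 410 and the MAC adding module 409 can be sketched as follows; the 14-byte header length follows standard Ethernet framing, and the function names are assumptions.

```python
# Sketch of the MAC deleting module 410 (upstream) and MAC adding module 409
# (downstream); all names here are illustrative assumptions.

ETHERNET_HEADER_LEN = 6 + 6 + 2   # MAC DA + MAC SA + length or frame type

def strip_ethernet_header(frame: bytes) -> bytes:
    """Upstream: remove MAC DA, MAC SA and the length/frame type field."""
    return frame[ETHERNET_HEADER_LEN:]

def add_ethernet_header(packet: bytes, terminal_mac_da: bytes,
                        gateway_mac_sa: bytes, length_or_type: bytes) -> bytes:
    """Downstream: prepend the terminal's MAC DA, the gateway's MAC SA and the
    Ethernet length or frame type before the packet is sent."""
    return terminal_mac_da + gateway_mac_sa + length_or_type + packet
```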
The other modules in the ethernet protocol gateway function similarly to the access switch.
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA SA Reserved Payload CRC
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), there are 256 possibilities at most, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses;
the Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload part has different lengths according to the type of datagram: it is 64 bytes for the various types of protocol packets and 32+1024 = 1056 bytes for unicast data packets; of course, the length is not limited to these two cases;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2 or even more than 2 connections between two devices; that is, there may be more than 2 connections between a node switch and a node server, between a node switch and a node switch, and between a node server and a node server. However, the metropolitan area network address of each metropolitan area network device is unique, so in order to accurately describe the connection relationship between metropolitan area network devices, a parameter is introduced in the embodiment of the present invention: a label, to uniquely describe a metropolitan area network device.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label. Assuming that there are two connections between device A and device B, there are 2 labels for packets from device A to device B and 2 labels for packets from device B to device A. Labels are classified into incoming labels and outgoing labels: assuming that the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet leaving device A (the outgoing label) may become 0x0001. The network access process of the metropolitan area network is a network access process under centralized control; that is, both address allocation and label allocation in the metropolitan area network are dominated by the metropolitan area server, and the node switch and the node server execute them passively. This differs from label allocation in MPLS, which is the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA SA Reserved Label Payload CRC
Namely Destination Address (DA), Source Address (SA), Reserved bytes (Reserved), label, payload (PDU), CRC. The format of the label may be defined as follows: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used, and it is positioned between the reserved bytes and the payload of the packet.
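A sketch of reading the label from a metropolitan area network packet, assuming the same DA/SA/reserved field sizes as the access network packet (function name and offsets are illustrative assumptions):

```python
# Sketch of reading the label in a metropolitan area network packet: the 32-bit
# label sits between the 2 reserved bytes and the payload, and only its lower
# 16 bits are used.
def read_metro_label(raw: bytes) -> int:
    label_field = int.from_bytes(raw[18:22], "big")   # offset = DA(8) + SA(8) + reserved(2)
    return label_field & 0xFFFF                       # upper 16 bits are reserved
```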
The embodiments in this specification are described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The object identification processing method and device based on the video network provided by the present invention have been introduced in detail above, and specific examples are applied herein to explain the principle and implementation of the present invention; the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
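To make the data flow of the claimed method easier to follow, the following Python sketch mirrors the data sets recited in claim 1 below; the sample OIDs, variable names, and the plain-dict "data object" are illustrative assumptions rather than the patent's actual implementation:

```python
# First data set (from the video networking network device): first object identifier
# (base OID + "." + index) -> target data.  Second data set: index values of the target data.
first_data_set = {"1.3.6.1.2.1.2.2.1.2.1": "eth0", "1.3.6.1.2.1.2.2.1.5.1": 1000000000}
second_data_set = ["1"]

# Third data set (stored locally on the terminal): second object identifier -> attribute name.
third_data_set = {"1.3.6.1.2.1.2.2.1.2": "ifDescr", "1.3.6.1.2.1.2.2.1.5": "ifSpeed"}

# Fourth data set: third object identifiers built from each second object identifier and
# each index value, keeping the attribute name of the composing second object identifier.
fourth_data_set = {
    f"{oid}.{index}": attr_name
    for oid, attr_name in third_data_set.items()
    for index in second_data_set
}

# Fifth data set: attribute name -> the target data whose first object identifier equals
# the third object identifier.
fifth_data_set = {
    attr_name: first_data_set[oid]
    for oid, attr_name in fourth_data_set.items()
    if oid in first_data_set
}

# Converting the fifth data set into a data object (a plain dict here, for illustration).
data_object = dict(fifth_data_set)
print(data_object)  # -> {'ifDescr': 'eth0', 'ifSpeed': 1000000000}
```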

Claims (10)

1. An object identification processing method based on the video network, characterized in that the video network comprises a video networking network device and a video network terminal, and the method is applied to the video network terminal and comprises the following steps:
obtaining a first data set comprising at least one first object identifier and corresponding target data, and a second data set comprising at least one index value of the target data, from the video networking network device; wherein the first object identifier comprises an index value;
acquiring a third data set which comprises at least one second object identifier and a corresponding object attribute name and is locally stored by the video network terminal;
generating a fourth data set comprising at least one third object identifier and a corresponding object attribute name according to the third data set and the second data set; wherein the at least one third object identifier is respectively composed of each second object identifier in the third data set and each index value in the second data set, and the object attribute name corresponding to the third object identifier is the object attribute name corresponding to the second object identifier composing the third object identifier;
generating a fifth data set comprising an object attribute name corresponding to at least one third object identifier and corresponding target data corresponding to a first object identifier which is the same as the third object identifier according to the first data set and the fourth data set;
converting the fifth set of data into data objects.
2. The method of claim 1, wherein obtaining a first data set comprising at least one first object identifier and corresponding target data from the video networking network device comprises:
obtaining at least one first object identification and corresponding target data from the video networking network device;
and storing the acquired at least one first object identifier and corresponding target data into the first data set.
3. The method of claim 1, wherein obtaining the second data set comprising at least one index value of target data from the video networking network device comprises:
and acquiring a second data set of a data table storing the target data in a management information base on the video networking network device.
4. The method of claim 1, wherein generating a fifth data set comprising an object attribute name corresponding to at least one third object identifier and corresponding target data corresponding to a first object identifier which is the same as the third object identifier according to the first data set and the fourth data set comprises:
and taking at least one object attribute name in the fourth data set as a key of a key-value pair in the fifth data set, and taking at least one target data in the first data set as a value of the key-value pair in the fifth data set, wherein a third object identifier corresponding to the object attribute name in the same key-value pair is the same as a first object identifier corresponding to the target data.
5. The method of claim 1, wherein the first object identifier, the second object identifier, and the third object identifier are OIDs.
6. An object identification processing device based on the video network, wherein the video network comprises a video networking network device and a video network terminal, the device is applied to the video network terminal, and the device comprises:
a device set acquisition module for acquiring a first data set including at least one first object identifier and corresponding target data, and a second data set including at least one index value of the target data from the video networking network device; wherein the first object identifier comprises an index value;
the local set acquisition module is used for acquiring a third data set which is locally stored by the video network terminal and comprises at least one second object identifier and a corresponding object attribute name;
a fourth set generating module, configured to generate a fourth data set including at least one third object identifier and a corresponding object attribute name according to the third data set and the second data set; wherein the at least one third object identifier is respectively composed of each second object identifier in the third data set and each index value in the second data set, and the object attribute name corresponding to the third object identifier is the object attribute name corresponding to the second object identifier composing the third object identifier;
a fifth set generating module, configured to generate a fifth data set including an object attribute name corresponding to at least one third object identifier and corresponding target data corresponding to a first object identifier that is the same as the third object identifier, according to the first data set and the fourth data set;
and the object conversion module is used for converting the fifth data set into a data object.
7. The device of claim 6, wherein the device set acquisition module comprises:
the data acquisition submodule is used for acquiring at least one first object identifier and corresponding target data from the video network equipment;
and the data storage submodule is used for storing the acquired at least one first object identifier and the corresponding target data into the first data set.
8. The device of claim 6, wherein the device set acquisition module comprises:
and the set acquisition submodule is used for acquiring a second data set of a data table storing the target data in a management information base on the video networking network device.
9. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the object identification processing method based on the video network as claimed in any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which causes a processor to execute the object identification processing method based on the video network according to any one of claims 1 to 7.
CN201911383579.3A 2019-12-27 2019-12-27 Object identification processing method, device, equipment and medium based on visual networking Active CN111198994B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911383579.3A CN111198994B (en) 2019-12-27 2019-12-27 Object identification processing method, device, equipment and medium based on visual networking

Publications (2)

Publication Number Publication Date
CN111198994A 2020-05-26
CN111198994B (en) 2024-01-09

Family

ID=70744847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911383579.3A Active CN111198994B (en) 2019-12-27 2019-12-27 Object identification processing method, device, equipment and medium based on visual networking

Country Status (1)

Country Link
CN (1) CN111198994B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120054286A1 (en) * 2010-08-31 2012-03-01 Sap Ag Methods and systems for business interaction monitoring for networked business process
CN109302384A (en) * 2018-09-03 2019-02-01 视联动力信息技术股份有限公司 A kind of processing method and system of data
CN109522110A (en) * 2018-11-19 2019-03-26 视联动力信息技术股份有限公司 A kind of multiple task management system and method based on view networking
CN109963107A (en) * 2019-02-20 2019-07-02 视联动力信息技术股份有限公司 A kind of display methods and system of audio, video data
CN110072126A (en) * 2019-03-19 2019-07-30 视联动力信息技术股份有限公司 Data request method, association turn server and computer readable storage medium

Also Published As

Publication number Publication date
CN111198994B (en) 2024-01-09

Similar Documents

Publication Publication Date Title
CN109672856B (en) Resource synchronization method and device
CN111193788A (en) Audio and video stream load balancing method and device
CN110190973B (en) Online state detection method and device
CN109996086B (en) Method and device for inquiring service state of video networking
CN109617956B (en) Data processing method and device
CN109474715B (en) Resource configuration method and device based on video network
CN109788247B (en) Method and device for identifying monitoring instruction
CN110602039A (en) Data acquisition method and system
CN110557319B (en) Message processing method and device based on video network
CN110035297B (en) Video processing method and device
CN109743555B (en) Information processing method and system based on video network
CN109743284B (en) Video processing method and system based on video network
CN111478791B (en) Data management method and device
CN110493149B (en) Message processing method and device
CN110022500B (en) Packet loss processing method and device
CN110166363B (en) Multicast link monitoring method and device
CN109698953B (en) State detection method and system for video network monitoring equipment
CN109743360B (en) Information processing method and device
CN110677315A (en) Method and system for monitoring state
CN110113555B (en) Video conference processing method and system based on video networking
CN110213533B (en) Method and device for acquiring video stream monitored by video network
CN110266768B (en) Data transmission method and system
CN109688073B (en) Data processing method and system based on video network
CN110062259B (en) Video acquisition method, system, device and computer readable storage medium
CN110446058B (en) Video acquisition method, system, device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant