CN115550379A - Method and device for acquiring data from camera

Method and device for acquiring data from camera

Info

Publication number
CN115550379A
Authority
CN
China
Prior art keywords
camera
data
protocol
server
data type
Prior art date
Legal status
Pending
Application number
CN202110961751.XA
Other languages
Chinese (zh)
Inventor
周东东
王旭
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to PCT/CN2022/081986 (published as WO2023273426A1)
Publication of CN115550379A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L 67/14: Session management
    • H04L 67/146: Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
    • H04L 67/34: Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/18: Multiprotocol handlers, e.g. single devices capable of handling multiple protocols
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science
  • Signal Processing
  • Computer Networks & Wireless Communication
  • Multimedia
  • Computer Security & Cryptography
  • Studio Devices

Abstract

Disclosed are a method and a device for acquiring data from a camera, relating to the field of cameras. The method comprises the following steps: a server obtains two read requests that carry the same camera identification but request different data types, and the server acquires the corresponding data from the camera using the different camera data transmission protocols corresponding to the two data types. According to this scheme, the processing resources the server needs to query the physical camera are reduced, the time the server takes to determine the camera is shortened, and the efficiency with which the server acquires data from the camera is improved.

Description

Method and device for acquiring data from camera
The present application claims priority to Chinese Patent Application No. 202110739020.0, entitled "Multi-protocol access camera", filed with the China National Intellectual Property Administration on June 30, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of cameras, and in particular, to a method and an apparatus for acquiring data from a camera.
Background
A video monitoring system comprises a server and cameras. In general, the data types transmitted by different protocols are inconsistent, and one physical camera can access the server using multiple protocols. When the server registers cameras, a single physical camera is therefore logically managed as a plurality of cameras: each time the camera accesses the server with a different protocol, the server generates a corresponding camera identifier. As a result, multiple logical identifiers are often established in the server for each camera, which leads to duplicated management of the cameras by the server and inefficient camera lookup.
Disclosure of Invention
The present application provides a method and a device for acquiring data from a camera, which can solve the problem of inefficient camera lookup caused by the server managing the same camera repeatedly.
The technical scheme is as follows.
In a first aspect, the present application provides a method for acquiring data from a camera. The method is applied to a server, or to a device that can support the server in implementing the method (for example, a device that includes a chip system). The method for acquiring data from a camera includes: first, the server acquires a first read request for requesting first data and a second read request for requesting second data, where the first read request and the second read request carry the same camera identification, the data type of the first data is a first data type, the data type of the second data is a second data type different from the first data type, the first read request carries a first data type identification marking the first data type, and the second read request carries a second data type identification marking the second data type; second, the server acquires the first data from the camera corresponding to the camera identification using a first camera data transmission protocol that supports the first data type; in addition, the server acquires the second data from the camera corresponding to the camera identification using a second camera data transmission protocol that supports the second data type, where the second camera data transmission protocol is different from the first camera data transmission protocol.
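For illustration only, the following Python sketch shows this flow under assumed names (ReadRequest, handle_read, fetch_video, and fetch_image are not part of the patent): two read requests carry the same camera identification but different data type identifications, and the server selects a different camera data transmission protocol for each.

```python
# Illustrative sketch only; all identifiers are assumed, not defined by the patent.
from dataclasses import dataclass

@dataclass
class ReadRequest:
    camera_id: str       # both requests carry the same camera identification
    data_type_id: str    # first data type vs. second data type

def fetch_video(camera_addr, request):
    # stand-in for a protocol that supports video (e.g. GB/T 28181)
    return f"video stream from {camera_addr}"

def fetch_image(camera_addr, request):
    # stand-in for a protocol that supports images (e.g. GA/T 1400)
    return f"image from {camera_addr}"

# one-to-one correspondence between data type and camera data transmission protocol
PROTOCOL_FOR_TYPE = {"video": fetch_video, "image": fetch_image}
# a single camera identification registered for the physical camera
CAMERA_REGISTRY = {"cam-002": "10.0.0.12"}

def handle_read(request: ReadRequest):
    camera_addr = CAMERA_REGISTRY[request.camera_id]      # one lookup, one identifier
    protocol = PROTOCOL_FOR_TYPE[request.data_type_id]    # protocol supporting this data type
    return protocol(camera_addr, request)

first_data = handle_read(ReadRequest("cam-002", "video"))    # first read request
second_data = handle_read(ReadRequest("cam-002", "image"))   # second read request
```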
In the method for acquiring data from a camera provided in this embodiment, multiple read requests of different protocols carry the same camera identification, and the server can acquire data of the corresponding type from the camera according to that camera identification. Because read requests of different protocols use the same camera identification, the total number of camera identifications stored by the server is smaller than in the prior art, which reduces the processing resources the server needs to query the physical camera, shortens the time the server takes to determine the camera corresponding to the camera identification, and improves the efficiency with which the server acquires data from the camera.
In addition, since the read requests for the same camera use the same camera identification, the read requests (and the data they request) can be associated with each other, so that an association relationship between the read requests (and the data they request) is established and management is performed based on that association relationship. Furthermore, the camera management function and data processing function of the server may be licensed according to the number of cameras; in this implementation, the number of cameras registered in the server is reduced, so the cost of purchasing licenses is also reduced.
In an alternative implementation, the data types supported by the first camera data transmission protocol include video, the data types it does not support include images, and the first data is a video; the data types supported by the second camera data transmission protocol include images, the data types it does not support include video, and the second data is an image. The server can use the same camera identification and different camera data transmission protocols to acquire different types of data from one camera, which prevents the server from managing one camera repeatedly under different logical identifications and improves the processing efficiency of the camera management system to which the server belongs.
In another optional implementation manner, the server acquires the first data by using a first camera data transmission protocol, including: the server schedules the first read request to a first protocol plug-in supporting a first camera data transmission protocol, and acquires first data from the camera based on the first camera data transmission protocol by using the first protocol plug-in. The server using a second camera data transfer protocol to obtain second data, comprising: the server dispatches a second read request to a second protocol plug-in supporting a second camera data transfer protocol, and acquires second data from the camera based on the second camera data transfer protocol using the second protocol plug-in. By way of example, the protocol plug-in refers to a functional component or software unit that supports implementation of the camera data transmission protocol, e.g., the first protocol plug-in may implement one or more functions of the first camera data transmission protocol.
Therefore, the server utilizes the protocol plug-in to realize the function of the camera data transmission protocol, the process that the server needs to inquire the camera data transmission protocol every time when acquiring data from the camera is avoided, the consumption of processing resources in the server is reduced, and the processing efficiency of the camera management system is improved.
In another alternative implementation, before the server obtains the first read request and the second read request, the method includes: the server acquires a plug-in loading request from a client, where the loading request carries an instruction to load a first protocol plug-in and an instruction to load a second protocol plug-in; the server then selects the first protocol plug-in and the second protocol plug-in from a plurality of locally configured protocol plug-ins and loads them respectively.
The protocol plug-in can refer to a lightweight access service (or microservice), that is, the protocol plug-in can be operated in the server as a process (or thread), and the server uses the protocol plug-in to realize the function of the camera data transmission protocol, so that the process that the server needs to query the camera data transmission protocol every time the server acquires data from the camera is avoided, the consumption of processing resources in the server is reduced, and the processing efficiency of the camera management system is improved.
In another alternative implementation manner, the server records a one-to-one correspondence relationship between the data type and the camera data transmission protocol. For example, the server uses a first camera data transfer protocol that supports a first data type, including: and the server selects a first protocol plug-in corresponding to the first data type according to the first data type. As another example, the server uses a second camera data transfer protocol that supports a second data type, comprising: and the server selects a second protocol plug-in corresponding to the second data type according to the second data type. Different data types correspond to different protocol plug-ins, the server can determine the protocol plug-ins required by the read request by using the data type identification carried by the read request, the read request is prevented from being matched with each protocol plug-in by the server, the efficiency of acquiring the data indicated by the read request by the server is improved, and the processing efficiency of the camera management system to which the server belongs is further improved.
In another optional implementation manner, the first protocol plug-in is provided with an upstream port and a downstream port, wherein: each uplink port is used for accessing one client; each downstream port is used to access one camera. In this embodiment, the first protocol plug-in may receive, through the multiple uplink ports, read requests sent by multiple clients, and obtain, from one or more cameras, data corresponding to the multiple read requests; the first protocol plug-in can also support a client to acquire data from a plurality of cameras through a plurality of downlink ports, so that the efficiency of the server for acquiring data from the cameras in parallel is improved, and the management performance of the camera management system is further improved.
In another alternative implementation, the first camera data transmission protocol and the second camera data transmission protocol are each one of the following protocols: the technical requirements for information transmission, exchange, and control of the public security video monitoring networking system (GB/T 28181), the Open Network Video Interface Forum (ONVIF) protocol, the Real Time Streaming Protocol (RTSP), the technical requirements for the public security video image information system (GA/T 1400), and vendor proprietary protocols.
In another optional implementation manner, the method further includes: the server acquires the registration information of the camera, and registers the camera with the camera identification to support: a first camera data transmission protocol, a second camera data transmission protocol.
In the process of registering the camera, the server can determine the camera data transmission protocol supported by the camera, so that the camera data transmission protocol supported by the camera is prevented from being inquired again in the process of acquiring data from the camera by the server, the processing time of acquiring the data from the camera by the server is shortened, and the processing efficiency of the camera management system to which the server belongs is improved.
In another optional implementation manner, the method further includes: the server synchronizes the camera identification and the protocol corresponding to the camera identification to the client; the client is used for sending the first read request and the second read request. Therefore, the client can know the camera data transmission protocol supported by each camera identification, the process that the client inquires the camera data transmission protocol corresponding to the camera identification is avoided, the time for acquiring data from the camera is shortened, and the data acquisition efficiency is improved.
In another optional implementation manner, the method further includes: the server establishes a first corresponding relationship between the camera identification and the first data, and establishes a second corresponding relationship between the camera identification and the second data.
In another optional implementation manner, the server stores service data of one or more cameras, and the method further includes: a server acquires a query request from a client; the query request carries a camera identifier; the server acquires first data and second data associated with the camera from the business data according to the query request.
Therefore, one camera only has one unique device code (camera identification) in the server, different types of service data acquired by the one camera can be associated through the camera identification, and the client can view videos, search and identify intelligent pictures and the like through the camera identification under the condition that the experience of user services is unchanged. In addition, in the process of accessing the camera to the server, because the control (camera identification) and the configuration (camera data transmission protocol supported by the camera) are separated, the efficiency of accessing the camera to the server is improved, and the access cost of the camera is reduced.
In a second aspect, the present application provides an apparatus for acquiring data from a camera, the apparatus comprising means for performing the method for acquiring data from a camera of the first aspect or any one of the possible implementations of the first aspect. The apparatus may be implemented as a software module; or may be hardware with corresponding functions, such as a server, a Network Video Recorder (NVR), and the like.
For the beneficial effects, refer to the description of the first aspect; details are not repeated here. The apparatus for acquiring data from a camera has the function of implementing the actions in any of the method examples of the first aspect. The function can be implemented by hardware, or by hardware executing corresponding software; for example, the apparatus for acquiring data from a camera is applied to a server, or to a device that supports the server in implementing the method for acquiring data from a camera.
The hardware or software includes one or more modules corresponding to the functions described above. In one possible design, the apparatus for acquiring data from a camera includes: a communication unit, configured to obtain a first read request for requesting first data and a second read request for requesting second data, where the first read request and the second read request carry the same camera identification, the data type of the first data is a first data type, the data type of the second data is a second data type different from the first data type, the first read request carries a first data type identification marking the first data type, and the second read request carries a second data type identification marking the second data type; a first acquiring unit, configured to acquire the first data from the camera corresponding to the camera identification using a first camera data transmission protocol that supports the first data type; and a second acquiring unit, configured to acquire the second data from the camera corresponding to the camera identification using a second camera data transmission protocol that supports the second data type, where the second camera data transmission protocol is different from the first camera data transmission protocol.
In an alternative implementation, the data types supported by the first camera data transmission protocol include video, the data types it does not support include images, and the first data is a video; the data types supported by the second camera data transmission protocol include images, the data types it does not support include video, and the second data is an image.
In another optional implementation manner, the first obtaining unit is specifically configured to schedule a first read request to a first protocol plug-in supporting a first camera data transmission protocol, and obtain first data from the camera based on the first camera data transmission protocol by using the first protocol plug-in; and the second acquiring unit is specifically configured to schedule a second read request to a second protocol plug-in supporting a second camera data transmission protocol, and acquire second data from the camera based on the second camera data transmission protocol by using the second protocol plug-in.
In an optional implementation manner, the communication unit is further configured to obtain a plug-in loading request from the client, where the loading request carries: loading an indication of a first protocol plug-in and an indication of a second protocol plug-in; the apparatus for acquiring data from a camera further comprises: a loading unit; the loading unit is used for selecting a first protocol plug-in and a second protocol plug-in from a plurality of locally configured protocol plug-ins to respectively load.
In an optional implementation manner, a one-to-one correspondence relationship between the data type and the camera data transmission protocol is recorded in the server, where: the first obtaining unit is further specifically configured to select, according to the first data type, a first protocol plug-in corresponding to the first data type; the second obtaining unit is further specifically configured to select, according to the second data type, a second protocol plug-in corresponding to the second data type.
In an alternative implementation, the first protocol plug-in is provided with an upstream port and a downstream port, wherein: each uplink port is used for accessing one client; each downstream port is used to access one camera.
In an alternative implementation, the first camera data transmission protocol and the second camera data transmission protocol are each one of the following protocols: GB/T28181 protocol, ONVIF protocol, RTSP protocol, GA/T1400 protocol, and vendor proprietary protocols.
In an optional implementation manner, the communication unit is further configured to acquire registration information of the camera; the apparatus for acquiring data from a camera further comprises: a registration unit; the registration unit is used for registering the camera with the camera identification, and registering the camera as a support: a first camera data transmission protocol, a second camera data transmission protocol.
In an optional implementation manner, the communication unit is further configured to synchronize the camera identifier and a protocol corresponding to the camera identifier to the client; the client is used for sending the first read request and the second read request.
In an optional implementation manner, the apparatus for acquiring data from a camera further includes: an association unit; the association unit is used for establishing a first corresponding relation between the camera identification and the first data; the association unit is further configured to establish a second corresponding relationship between the camera identifier and the second data.
In an optional implementation manner, the server stores service data of one or more cameras; the communication unit is also used for acquiring a query request from the client; the query request carries the camera identification. The apparatus for acquiring data from a camera further comprises: a query unit; the query unit is used for acquiring first data and second data related to the camera from the business data according to the query request.
In a third aspect, the present application provides a server comprising a processor and an interface circuit, where the interface circuit is configured to receive signals from, or send signals to, devices other than the server, and the processor is configured to implement, through logic circuits or by executing code instructions, the operation steps of the method according to the first aspect or any one of its possible implementations.
In a fourth aspect, the present application provides an imaging management system, including: a server and one or more cameras.
First, the server obtains a first read request for requesting first data and a second read request for requesting second data, where the first read request and the second read request carry the same camera identification, the camera identification corresponds to one of the one or more cameras, the data type of the first data is a first data type, the data type of the second data is a second data type different from the first data type, the first read request carries a first data type identification marking the first data type, and the second read request carries a second data type identification marking the second data type.
Secondly, the server acquires first data from the camera corresponding to the camera identification by using a first camera data transmission protocol supporting the first data type.
In addition, the server also acquires second data from the camera corresponding to the camera identification using a second camera data transmission protocol supporting a second data type, wherein the second camera data transmission protocol is different from the first camera data transmission protocol.
Therefore, the read requests of a plurality of different protocols carry the same camera identification, and the server can acquire the data of the corresponding type from the camera according to the camera identification. Because the reading requests of different protocols use the same camera identification, the total amount of the camera identification stored by the server is less than that in the prior art, so that the processing resources required by the server for inquiring the physical camera are reduced, the time for the server to determine the camera corresponding to the camera identification is shortened, and the efficiency of the server for acquiring data from the camera is improved.
It is noted that the camera management system comprises respective means for performing the method of the first aspect or any one of its possible implementations. The beneficial effects can be found in the description of any one of the first aspect, and are not described herein again. The camera management system has a function of implementing the behavior in the method example of any of the first aspect described above. The functions can be realized by hardware, and the functions can also be realized by executing corresponding software by hardware.
In a fifth aspect, the present application provides a computer-readable storage medium, in which a computer program or instructions are stored, which, when executed by a processor in a server, are used to implement the operational steps of the method according to the first aspect or any one of the possible implementations of the first aspect.
In a sixth aspect, the present application provides a computer program product comprising instructions that, when executed on a server or a processor, cause the server or the processor to perform the operational steps of the method according to any one of the above aspects or their possible implementations.
In a seventh aspect, the present application provides a chip, comprising a logic circuit and an interface circuit, the interface circuit being configured to input information and/or output information; the logic circuit is configured to perform the above aspects or possible implementations of the aspects, process input information and/or generate output information.
For example, the interface circuit may obtain a first read request and a second read request, the logic circuit may implement the method for obtaining data from the camera according to the first aspect or the possible implementation manner of the first aspect for the first read request and the second read request, and the interface circuit may further obtain first data indicated by the first read request and second data indicated by the second read request from the camera.
The present application can further combine to provide more implementations on the basis of the implementations provided by the above aspects.
Drawings
Fig. 1 is a schematic diagram of a camera management system provided in the present application;
fig. 2 is a schematic diagram of a camera access server provided in the present application;
fig. 3 is a schematic display diagram of a camera accessing server according to the present application;
Fig. 4 is a first flowchart of a method for acquiring data from a camera according to the present application;
Fig. 5 is a second flowchart of a method for acquiring data from a camera according to the present application;
Fig. 6 is a schematic diagram of an apparatus for acquiring data from a camera according to the present application;
fig. 7 is a schematic structural diagram of a server according to the present application.
Detailed Description
The terms "first," "second," and "third," etc. in the description and claims of this application and the above-described drawings are used for distinguishing between different objects and not for limiting a particular order.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of the word "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
For clarity and conciseness of the description of the embodiments described below, a brief introduction of the related art is first given.
Fig. 1 is a schematic diagram of an image pickup management system provided in the present application, where the image pickup management system may include an image pickup device, a server, and a terminal device, and communication between the devices may be implemented through a network to implement transmission of video, images, or other information. The network may be the internet, or other network. The network may include one or more network devices, such as a router or switch.
In some cases, the image pickup apparatus illustrated in fig. 1 may refer to a dedicated apparatus having an image pickup function.
For example, in the video capturing process, the image capturing apparatus may be a device used for road monitoring: the image capturing apparatus 111A is a dome camera, the image capturing apparatus 111B is an infrared dome camera, and the image capturing apparatus 111C is a 4K (resolution: 3840 × 2160) bullet camera.
In other cases, the camera device shown in fig. 1 may refer to a general-purpose device with a camera function, such as a mobile phone, a tablet computer, or a smart wearable device with a camera function.
As shown in fig. 1, in the view management process, the server 120 may be, for example, an application server that supports application services such as video services, image services, and other services based on video or images. The server 120 may also be a dedicated device that can acquire data from the cameras, such as an NVR.
In addition, the server 120 may also refer to a data center, which may include one or more physical devices with video processing functions, such as a server, a mobile phone, a tablet computer, or other devices.
It is noted that the user can view the data stored in the server 120 through the client. The data being viewed may be pictures, videos, and the like. The user can also obtain real-time data shot by the camera shooting equipment through the client.
For example, a client refers to an application running on a physical machine or a virtual machine, which can obtain a service request and send the service request (e.g., a data read request) to the server 120.
The client may be a computer running the application program, and the computer running the application program may be a physical machine or a virtual machine.
When the client is a physical machine, the client may be any one of the terminal devices, and the embodiment of the present application does not limit the specific technology and the specific device form adopted by the client. The client can also be a device such as a mobile phone and a server running the application program.
As shown in fig. 1, in the video display process, the client may refer to any one of the terminal device 131 to the terminal device 133. For example, the terminal device 131 may be VR glasses, and the user may control the viewing angle range by turning and acquire data of the corresponding viewing angle range from the camera via the server 120; the terminal device 132 may be a mobile phone, and the user may control the viewing angle range on the terminal device 132 through a touch operation or an air gesture operation and obtain the video, pictures, and other data captured by the image capturing apparatus 111; the terminal device 133 may be a personal computer, and the user may control the content displayed on the display screen, such as video, pictures, and alarm information, through an input device such as a mouse or a keyboard.
In one possible example, the server 120 used in the camera management process refers to a server cluster in which a plurality of servers are deployed, and the server cluster may have a rack, and the rack may establish communication for the plurality of servers through a wired connection, such as a Universal Serial Bus (USB) or peripheral component interconnect express (PCIe) high-speed bus.
Fig. 1 is a schematic diagram, and other devices, which are not shown in fig. 1, may also be included in the image pickup management system, and the embodiment of the present application does not limit the number and types of the devices included in the system.
In order to implement the method for acquiring data from a camera provided by the present application, on the basis of the camera management system shown in fig. 1, the present application provides an implementation for accessing a camera to a server. As shown in fig. 2, which is a schematic diagram of a camera accessing a server provided in the present application, the camera 210 may implement the functions of each image capturing apparatus shown in fig. 1, and the server 220 may implement at least the functions of the server 120 shown in fig. 1.
For example, the server 220 may implement functions such as accessing a camera; storing information such as the camera's video and audio; instructing the camera to capture pictures intelligently; providing artificial intelligence (AI) computing power to analyze information such as video or images; and deploying control over information such as human faces or vehicles, thereby implementing functions such as searching for images by image. The client 230 may refer to a physical machine or virtual machine that obtains user instructions, or an application program running on the physical machine or virtual machine, such as the terminal device 131 to the terminal device 133 shown in fig. 1. In one possible implementation, the client 230 may run on the server 220.
As shown in fig. 2, the process of accessing a camera to a server provided by the present embodiment includes the following steps.
S210, the server 220 acquires registration information of the camera 210.
The registration information may be imported by server 220 from a database. Such as the database, records protocol information supported by the camera 210.
Optionally, the registration information may also be entered by the information manager using the client. Such as client 230 shown in fig. 2, or other clients in communication with server 220.
After the server 220 acquires the registration information, the server 220 registers the camera 210 with the camera identification of the camera 210, and registers the camera 210 as supporting: a first camera data transmission protocol, a second camera data transmission protocol.
In one possible scenario, the camera data transmission protocol supported by the camera 210 is related to the factory setting of the camera 210, for example, in the process of registering the camera identifier for the camera 210 by the server 220, the server 220 may select one or more protocols from a plurality of camera data transmission protocols of the factory setting for registration according to the user requirement.
For example, when the server 220 acquires registration information of a plurality of cameras, the cameras may be registered according to table 1 below.
TABLE 1
Physical camera | Camera identification | Supported camera data transmission protocols
Dome camera | Camera identification 1 | Protocol 1, Protocol 2
4K bullet camera-1 | Camera identification 2 | Protocol 1, Protocol 3, Protocol 4
4K bullet camera-2 | Camera identification 3 | Protocol 1, Protocol 2, Protocol 3
Infrared dome camera | Camera identification 4 | Protocol 1, Protocol 3
Wherein, the above protocol 1, protocol 2, protocol 3, and protocol 4 are any one of the following protocols: GB/T28181 protocol, ONVIF protocol, RTSP protocol, GA/T1400 protocol, and vendor proprietary protocols.
The physical camera, the camera identification and the camera data transmission protocol supported by the camera shown in table 1 are only examples provided in this embodiment, and should not be construed as limiting the application. In some possible cases, the camera identification may be represented by a continuous string of characters.
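As a non-authoritative sketch, a registration record of the kind shown in Table 1 could be kept in a structure such as the following; the names RegisteredCamera and register_camera are assumptions made here for illustration and are not prescribed by the patent.

```python
# Illustrative only; field and function names are assumed, not prescribed by the patent.
from dataclasses import dataclass, field

@dataclass
class RegisteredCamera:
    physical_camera: str                      # e.g. "Dome camera"
    camera_id: str                            # single camera identification per physical camera
    supported_protocols: list = field(default_factory=list)

REGISTRY: dict[str, RegisteredCamera] = {}

def register_camera(physical_camera: str, camera_id: str, protocols: list):
    # One entry per physical camera, listing every camera data transmission protocol it supports.
    REGISTRY[camera_id] = RegisteredCamera(physical_camera, camera_id, protocols)

# Rows of Table 1 expressed with this structure.
register_camera("Dome camera",          "Camera identification 1", ["Protocol 1", "Protocol 2"])
register_camera("4K bullet camera-1",   "Camera identification 2", ["Protocol 1", "Protocol 3", "Protocol 4"])
register_camera("4K bullet camera-2",   "Camera identification 3", ["Protocol 1", "Protocol 2", "Protocol 3"])
register_camera("Infrared dome camera", "Camera identification 4", ["Protocol 1", "Protocol 3"])
```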
S220, the server 220 synchronizes the camera id and the protocol corresponding to the camera id to the client 230.
Alternatively, server 220 may synchronize table 1 described above to client 230.
In an example, if the registration information is imported from the database by the server 220, after the server 220 registers the camera 210, the camera identifier and the protocol corresponding to the camera identifier may be synchronized to the client 230 by using a synchronization process.
In another example, if the registration information is entered by the information manager using the client 230, the server 220 may synchronize the camera identifier of the camera 210 with the client 230 after registering the camera 210.
In this embodiment, the server 220 synchronizes the camera identification and the protocol corresponding to the camera identification to the client 230. When the client 230 needs to acquire data from a camera, the client 230 can check, according to the camera identification, the camera data transmission protocols supported by the camera corresponding to that identification; the client 230 can then use the camera identification to acquire data of the specified type from the specified camera.
S230, the server 220 obtains a plug-in loading request from the client 230.
The plug-in loading request carries an indication to load the first protocol plug-in and an indication to load the second protocol plug-in.
Furthermore, the server 220 may select a first protocol plug-in and a second protocol plug-in from the plurality of locally configured protocol plug-ins for loading respectively according to the plug-in loading request.
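A minimal sketch of how the server might act on such a plug-in loading request, assuming a local catalogue of configured plug-ins (AVAILABLE_PLUGINS and load_plugins are hypothetical names, not the patent's API):

```python
# Illustrative sketch; plug-in names and the loader API are assumptions, not the patent's API.
AVAILABLE_PLUGINS = {
    "GB/T28181": "gb28181_plugin",   # video access
    "GA/T1400":  "gat1400_plugin",   # picture access
    "ONVIF":     "onvif_plugin",
    "RTSP":      "rtsp_plugin",
}

def load_plugins(loading_request: dict) -> list:
    """Select the indicated protocol plug-ins from the locally configured ones and load them."""
    loaded = []
    for protocol_name in loading_request["plugins"]:     # e.g. ["GB/T28181", "GA/T1400"]
        plugin = AVAILABLE_PLUGINS[protocol_name]        # pick from the local configuration
        loaded.append(plugin)                            # a real server would start it as a process
    return loaded

# The client asks the server to load the first and second protocol plug-ins.
loaded = load_plugins({"plugins": ["GB/T28181", "GA/T1400"]})
```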
As shown in fig. 3, fig. 3 is a schematic display diagram of accessing a camera to a server according to the present application. A user may select the adding mode of the camera (video access, picture access, or video + picture access) and select the protocol used for accessing the camera to the server (the "drive" shown in fig. 3).
The view management platform code refers to the serial number or identifier of the camera management system (also called a view management platform) provided by the server; the manufacturer refers to the manufacturer of the camera; and the server (server list) is selected to indicate the server, in the camera management system, that the camera needs to access. It is noted that in some possible scenarios, the server list is also referred to as a network video recorder (NVR) list.
The user name, the password, and the confirmation password indicate the account used to log in to the client, and the access gateway refers to the device connection gateway (DCG) that is responsible for managing the camera in the server that the camera is about to access. The Internet Protocol (IP) address refers to an identification number assigned by the manufacturer to the camera 210; for example, this identification number may be used as the camera identification of the camera 210 in the server 220, and in other possible cases the camera identification may be represented by other information.
For example, in an actual camera access process, the instruction to load the first protocol plug-in and the instruction to load the second protocol plug-in may refer to the protocol information selected by the client 230. As shown in fig. 3, if the first protocol plug-in corresponds to the GB/T 28181 protocol and the second protocol plug-in corresponds to the GA/T 1400 protocol, the client 230 may select the drives "GB/T 28181" and "GA/T 1400".
On the basis of the camera management system shown in fig. 1, fig. 4 is a flowchart of a method for acquiring data from a camera provided by the present application. The camera 211 may implement the function of the image capturing apparatus 111A shown in fig. 1, and the camera 212 and the camera 213 may implement the function of the image capturing apparatus 111C shown in fig. 1. The server 220 can at least implement the functions of the server 120 shown in fig. 1, for example, accessing a camera; storing information such as the camera's video and audio; instructing the camera to capture pictures intelligently; providing artificial intelligence (AI) computing power to analyze information such as video or pictures; and deploying control over information such as faces or vehicles, so as to implement functions such as searching for images by image.
As shown in fig. 4, a method for acquiring data from a camera provided in an embodiment of the present application includes the following steps.
S410, the server 220 obtains a first read request for requesting first data and a second read request for requesting second data.
The data type of the first data is a first data type.
The data type of the second data is a second data type different from the first data type.
The data types involved in this embodiment may be: video, pictures, alarm information, camera configuration parameters, and the like. The data type of the second data is not consistent with the data type of the first data. Illustratively, the first data type is video and the second data type is picture.
It is noted that the first read request and the second read request carry the same camera identification, and both read requests indicate the same camera. The camera identification 2 as shown in fig. 4 indicates the camera 212. For example, the first read request and the second read request may refer to service requests sent by a user to the server 220 through a client.
In one possible scenario, the first read request further carries a first data type identification that identifies the first data type, and the second read request further carries a second data type identification that identifies the second data type.
Optionally, a one-to-one correspondence between the data type and the camera data transmission protocol is recorded in the server. For example, the data type identification may refer to the message number carried in the read request, where the message number is a field in the read request that indicates the service type of the read request, and different service types require different data types. Table 2 below shows the correspondence among the message number, the service type, and the data type provided in this example.
TABLE 2
Message number | Service type | Data type
01 | Viewing video | Video
02 | Viewing pictures | Picture
03 | AI video analytics | Video
04 | AI alarm | Text
05 | Parameter configuration | Text
In a first alternative case, as shown in table 2, for viewing video (01) the data type is video, and the format of the video may include, but is not limited to: MPEG-4, H.264, and H.265.
In a second alternative, as shown in table 2, a picture (02) is viewed, the data type is a picture, and the format may include but is not limited to: M-JPEG, motion-JPEG. For example, viewing the picture (02) may refer to picture reporting and the like.
In a third alternative case, as shown in table 2, AI video analysis (03) may refer to AI analysis of a video, which may include, but is not limited to: object recognition, vehicle recognition, edge detection, object detection, or the like.
In a fourth alternative case, as shown in table 2, the AI alarm (04) may refer to an alarm after performing AI analysis on the video during the video monitoring process, and the alarm may be transmitted from the camera to the server in a text format.
In a fifth alternative case, as shown in table 2, the parameter configuration (05) may refer to text data transmitted by the server 220 for parameter configuration of the camera.
The five optional cases are only examples provided in this embodiment, and should not be construed as limitations on data type identification. In other alternative cases, the data type identification may also be other fields in the read request.
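For example, resolving the data type identification from the message number of Table 2 could look like the following sketch; the dictionary and function names are illustrative assumptions, and only the mapping itself mirrors the table.

```python
# Illustrative only; the mapping mirrors Table 2, the code itself is not part of the patent.
MESSAGE_NUMBER_TO_TYPE = {
    "01": ("Viewing video",           "video"),
    "02": ("Viewing pictures",        "picture"),
    "03": ("AI video analytics",      "video"),
    "04": ("AI alarm",                "text"),
    "05": ("Parameter configuration", "text"),
}

def data_type_of(read_request: dict) -> str:
    """The message number field of the read request acts as the data type identification."""
    service, data_type = MESSAGE_NUMBER_TO_TYPE[read_request["message_number"]]
    return data_type

assert data_type_of({"message_number": "01", "camera_id": "cam-002"}) == "video"
assert data_type_of({"message_number": "02", "camera_id": "cam-002"}) == "picture"
```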
S420, the server 220 determines a camera corresponding to the camera identification.
One camera identification indicates one physical camera. As shown in fig. 4, camera identification 1 (shown by diamonds in fig. 4) indicates camera 211, camera identification 2 (shown by circles in fig. 4) indicates camera 212, and camera identification 3 (shown by squares in fig. 4) indicates camera 213.
In the case where the server 220 has access to multiple cameras, the camera identification carried by the read request may be used to determine the target camera from which data needs to be read.
S430, the server 220 obtains the first data from the camera corresponding to the camera identifier by using a first camera data transmission protocol supporting the first data type.
Optionally, the data type supported by the first camera data transmission protocol includes video, and the unsupported data type includes image, such as the above-mentioned first data is video.
The first camera data transmission protocol is, for example, the GB/T28181 protocol, the ONVIF protocol, the RTSP, or a proprietary vendor protocol.
In one possible scenario, the server 220 uses a first camera data transmission protocol supporting a first data type, including: the server 220 selects a first protocol plug-in corresponding to the first data type according to the first data type.
The first protocol plug-in is a functional component or software unit that supports implementation of a first camera data transmission protocol, and the first protocol plug-in may implement one or more functions of the first camera data transmission protocol. For example, if the first camera data transmission protocol is GB/T28181, ONVIF protocol or RTSP, the first protocol plug-in may implement the function of video transmission.
As shown in fig. 5, fig. 5 is a second flowchart of a method for acquiring data from a camera according to the present application, where a plurality of protocol plug-ins are deployed in the server 220, and the camera data transmission protocols supported by the protocol plug-ins may be different or not identical, where the function of one camera data transmission protocol is implemented by one protocol plug-in, for example, the function of the first camera data transmission protocol is implemented by the first protocol plug-in shown in fig. 5.
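The one-protocol-per-plug-in relationship could be sketched as below; the class names and the fetch method are assumptions made here for illustration, and in the scheme of this application the real plug-ins would run as separate access services rather than in-process classes.

```python
# Illustrative sketch; ProtocolPlugin and its subclasses are hypothetical, not the patent's components.
from abc import ABC, abstractmethod

class ProtocolPlugin(ABC):
    protocol_name: str
    supported_types: set

    @abstractmethod
    def fetch(self, camera_addr: str, request: dict):
        """Acquire data from the camera using this plug-in's camera data transmission protocol."""

class Gb28181Plugin(ProtocolPlugin):
    protocol_name = "GB/T 28181"
    supported_types = {"video"}
    def fetch(self, camera_addr, request):
        return f"video via GB/T 28181 from {camera_addr}"

class Gat1400Plugin(ProtocolPlugin):
    protocol_name = "GA/T 1400"
    supported_types = {"picture"}
    def fetch(self, camera_addr, request):
        return f"picture via GA/T 1400 from {camera_addr}"

# The server schedules each read request to the plug-in whose protocol supports the requested type.
plugins = [Gb28181Plugin(), Gat1400Plugin()]
plugin_for_type = {t: p for p in plugins for t in p.supported_types}
first_data = plugin_for_type["video"].fetch("10.0.0.12", {"camera_id": "cam-002"})
```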
In an alternative implementation, the first protocol card may be provided with an upstream port and a downstream port, wherein: each uplink port is used for accessing one client; each downstream port is used to access one camera.
The uplink port and the downlink port refer to an access point (access point) used by the device to access the server 220, and the server 220 may insert or extract a signal through the access point to observe or measure a variable of a client or a camera. In addition, server 220 may route signals from one device to another through the access point.
In this embodiment, the uplink port and the downlink port may refer to logical channel endpoints of a Transmission Control Protocol (TCP) or a User Datagram Protocol (UDP). It is noted that the upstream and downstream ports may also refer to physical interfaces, where possible.
Thus, in this embodiment, the first protocol plug-in may receive, through the plurality of uplink ports, the read requests sent by the plurality of clients, and obtain data corresponding to the plurality of read requests from one or more cameras; the first protocol plug-in can also support a client to acquire data from a plurality of cameras through a plurality of downlink ports, so that the efficiency of the server for acquiring data from the cameras in parallel is improved, and the management performance of the camera management system is further improved.
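A simplified sketch of such a port layout is given below; the port numbers and data structure are made up for illustration. Each uplink port accepts one client and each downlink port connects one camera, so several clients and cameras can be served in parallel.

```python
# Illustrative sketch only; port numbers and structure are assumptions.
first_plugin_ports = {
    "uplink": {                      # one TCP/UDP logical endpoint per client
        9001: "client-A",
        9002: "client-B",
    },
    "downlink": {                    # one endpoint per camera
        10001: "camera 211",
        10002: "camera 212",
        10003: "camera 213",
    },
}

def route(read_request: dict) -> str:
    """Route a request arriving on an uplink port to the downlink port of the target camera."""
    for port, camera in first_plugin_ports["downlink"].items():
        if camera == read_request["camera"]:
            return f"forward via downlink port {port}"
    raise KeyError("camera not attached to this plug-in")

print(route({"camera": "camera 212", "uplink_port": 9001}))
```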
In one possible example, as shown in fig. 5, the above server 220 using the first camera data transmission protocol to acquire the first data may include: the server 220 schedules the first read request to a first protocol plug-in supporting the first camera data transfer protocol and acquires the first data from the camera based on the first camera data transfer protocol using the first protocol plug-in.
It should be noted that the protocol plug-in may refer to a lightweight access service (or a micro service), that is, the protocol plug-in may operate in the server as a process (or a thread), and the server uses the protocol plug-in to implement a function of a camera data transmission protocol, thereby avoiding a process in which the server needs to query the camera data transmission protocol every time the server acquires data from the camera, improving efficiency of the server in finding the camera, reducing consumption of processing resources in the server, and improving processing efficiency of the camera management system.
With continued reference to fig. 4, the method for acquiring data from a camera according to this embodiment further includes the following step S440.
S440, the server 220 acquires the second data from the camera corresponding to the camera identification using a second camera data transmission protocol supporting the second data type.
Wherein the second camera data transmission protocol is different from the first camera data transmission protocol. Illustratively, the data types supported by the second camera data transfer protocol include images, and the unsupported data types include video, such as the second data described above being images.
For example, the second camera data transmission protocol is the technical requirements for the public security video image information system (GA/T 1400), a vendor proprietary protocol, or the like. GA/T 1400 cannot transmit long videos, so in a typical camera usage scenario GA/T 1400 can be considered as not supporting video transmission.
In one possible scenario, the server 220 uses a second camera data transfer protocol supporting a second data type, including: the server 220 selects a second protocol plug-in corresponding to the second data type according to the second data type.
The second protocol plug-in refers to a functional component or a software unit supporting implementation of the second camera data transmission protocol, and the second protocol plug-in may implement one or more functions of the second camera data transmission protocol. For example, if the data transmission protocol of the second camera is GA/T1400 or a vendor proprietary protocol, the second protocol plug-in may implement a function of reporting the picture.
As shown in fig. 5, the function of the second camera data transmission protocol is implemented by the second protocol plug-in shown in fig. 5.
In an alternative implementation, the second protocol plug-in may be provided with an uplink port and a downlink port, where each uplink port is used to access one client and each downlink port is used to access one camera. For the specific implementation of the uplink port and the downlink port, reference may be made to the related description of the first protocol plug-in, and details are not described here again.
Thus, in this embodiment, the second protocol plug-in may receive, through the multiple uplink ports, the read requests sent by the multiple clients, and obtain, from one or more cameras, data corresponding to the multiple read requests; the second protocol plug-in can also support a client to acquire data from a plurality of cameras through a plurality of downlink ports, so that the efficiency of the server for acquiring data from the cameras in parallel is improved, and the management performance of the camera management system is further improved.
In one possible example, as shown in fig. 5, the above server 220 using the second camera data transmission protocol to acquire the second data may include: server 220 schedules the second read request to a second protocol plug-in that supports a second camera data transfer protocol and uses the second protocol plug-in to obtain second data from the camera based on the second camera data transfer protocol.
In a general technical solution, when a plurality of read requests of different protocols indicate different types of data in one camera, each type of data is transmitted using a different camera data transmission protocol, so the camera identifications carried by the read requests are different and a plurality of different camera identifications indicate one physical camera. The server then needs to determine, for each of these camera identifications, the data indicated by each read request, which makes the server inefficient in acquiring data from the camera.
In contrast, in the method for acquiring data from a camera provided in this embodiment, multiple read requests of different protocols carry the same camera identification, and the server can acquire data of the corresponding type from the camera according to that camera identification. Because read requests of different protocols use the same camera identification, the total number of camera identifications stored by the server is smaller than in the above general solution, which reduces the processing resources the server needs to query the physical camera, shortens the time the server takes to determine the camera corresponding to the camera identification, and improves the efficiency with which the server acquires data from the camera.
In addition, since the read requests for the same camera use the same camera identification, the read requests (and the data they request) can be associated with each other, so that an association relationship between the read requests (and the data they request) is established and management is performed based on that association relationship. Furthermore, the camera management function and data processing function of the server may be licensed according to the number of cameras; in this implementation, the number of cameras registered in the server is reduced, so the cost of purchasing licenses is also reduced.
As such, in the method for acquiring data from a camera provided by the present application:
First, the server packages the camera data transmission protocols as protocol plug-ins, and each protocol plug-in can run as a microservice in an independent process. The DCG schedules the protocol plug-ins, and the scheduling process includes, but is not limited to: plug-in installation, start, stop, upgrade, rollback, uninstallation, and routing of plug-in service function messages. Because each protocol plug-in is released and runs independently, the protocol plug-ins do not affect each other during operation, which prevents a fault in one protocol plug-in from causing faults in the other protocol plug-ins and reduces the impact on the camera management system during service execution.
Secondly, the client can determine the protocol plug-in used by the camera to access the server according to the user requirements, and for the same camera, the camera identification of the camera in the server is consistent, so that the data disorder caused by complex association of a plurality of protocols is avoided, and the efficiency of managing a plurality of cameras by the server is improved.
Finally, the server can acquire various types of data from the camera corresponding to a camera identifier according to that identifier, which reduces the processing resources required for the server to look up the physical camera, shortens the time for the server to determine the camera corresponding to the identifier, and improves the efficiency with which the server acquires data from the camera.
In a possible embodiment, when a user adds a camera to the camera management system through a client, a combination of access protocol plug-ins is selected. For example, the user needs the camera management system to access a vendor A camera, and needs to interface with the camera's video, target recognition structured data, and thermal imaging human body temperature measurement function. Since the camera management system already has the standard GB/T28181 and GA/T1400 protocol plug-ins, the system only needs to use the vendor A private Software Development Kit (SDK) to implement a lightweight protocol plug-in that connects the camera's thermal imaging human body temperature measurement function. When the camera is added to the camera management system, these 3 plug-ins are selected. When the user performs a business operation, the DCG dispatches the video playing, structured data subscription, and thermal imaging human body temperature measurement request messages to the GB/T28181 plug-in, the GA/T1400 plug-in, and the private SDK plug-in respectively, then aggregates the data returned by each plug-in and feeds it back to the client used by the user.
Similarly, when the user needs the camera management system to access a vendor B camera, the camera management system can reuse the standard protocol plug-ins, and only a private SDK plug-in (vendor B) needs to be developed for a small number of vendor-specific functions, thereby improving the efficiency with which cameras are connected to the server in the camera management system.
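The vendor A example above can be pictured as a fan-out of one user operation to several plug-ins followed by aggregation of their replies. The Python sketch below is illustrative only; the operation names and the PLUGIN_HANDLERS mapping are hypothetical stand-ins for the DCG's message routing.

```python
# Hypothetical mapping from a requested operation to the plug-in that serves it.
PLUGIN_HANDLERS = {
    "video_play": "GB/T28181 plug-in",
    "structured_data_subscription": "GA/T1400 plug-in",
    "thermal_body_temperature": "vendor-A private SDK plug-in",
}

def dispatch_business_operation(camera_id: str, operations: list[str]) -> dict[str, str]:
    """Route each requested operation to its plug-in and collect the replies."""
    responses = {}
    for operation in operations:
        plugin = PLUGIN_HANDLERS[operation]
        # A real DCG would forward a request message to the plug-in process and
        # wait for its reply; here the reply is just a descriptive string.
        responses[operation] = f"{plugin} answered for camera {camera_id}"
    return responses

# The same camera identifier is reused for every operation on that camera.
print(dispatch_business_operation("001", ["video_play",
                                          "structured_data_subscription",
                                          "thermal_body_temperature"]))
```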
As an alternative implementation, the server 220 shown in fig. 2 to fig. 5 may further associate data corresponding to the same camera identifier; for example, the server 220 may establish a first correspondence between the camera identifier and the first data, and a second correspondence between the camera identifier and the second data. As shown in table 3 below, the server 220 may also store the different physical cameras, camera identifiers, and the various pieces of service data.
TABLE 3
Physical camera | Camera identification | Stored service data
Dome camera | 001 | Data 3 and data 4
4K gun type camera-1 | 002 | Data 1 and data 2
4K gun type camera-2 | 003 | Data 5 and data 6
Infrared ball-type camera | 004 | Data 7
For example, if the camera 212 is the 4K gun type camera-1 shown in table 3 and the camera identifier of the camera 212 is "002", the first data (data 1) and the second data (data 2) corresponding to "002" can be associated with each other. It should be noted that the server may store the service data of one or more cameras, such as data 1 to data 7 shown in table 3.
For example, the server 220 may further obtain a query request from the client, where the query request carries the camera identification (002 shown in table 3) of the camera 212; thus, the server 220 may obtain the first data and the second data (data 1, data 2 as shown in table 3) associated with the camera 212 from the locally stored service data.
Therefore, one camera has only one unique device code (camera identification) in the server, the different types of service data acquired from that camera can be associated through the camera identification, and the client can view video, search for and recognize intelligent images, and so on through the camera identification while the user's service experience remains unchanged. In addition, when the camera accesses the server, because control (the camera identification) and configuration (the camera data transmission protocols supported by the camera) are separated, the efficiency of connecting the camera to the server is improved and the access cost of the camera is reduced.
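The association and query described around table 3 amount to keying every piece of service data by the single camera identifier, so that one lookup returns all data types for a physical camera. The Python sketch below is illustrative only; the functions associate and query are hypothetical.

```python
# Service data keyed by camera identifier, then by data type.
SERVICE_DATA: dict[str, dict[str, str]] = {}

def associate(camera_id: str, data_type: str, data: str) -> None:
    """Record the correspondence between a camera identifier and one piece of data."""
    SERVICE_DATA.setdefault(camera_id, {})[data_type] = data

def query(camera_id: str) -> dict[str, str]:
    """Return every type of service data associated with the camera identifier."""
    return SERVICE_DATA.get(camera_id, {})

# 4K gun type camera-1 from table 3, identifier "002".
associate("002", "video", "data 1")
associate("002", "image", "data 2")
print(query("002"))   # {'video': 'data 1', 'image': 'data 2'}
```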
It is understood that, in order to implement the functions in the above embodiments, the server includes a corresponding hardware structure and/or software module for performing each function. The elements of each example, and method steps, described in connection with the embodiments disclosed herein may be embodied as hardware or in a combination of hardware and computer software. Whether a function is performed in hardware or computer software driven hardware depends on the specific application scenario and design constraints of the solution.
Fig. 6 and fig. 7 are schematic structural diagrams of a possible apparatus and server for acquiring data from a camera according to an embodiment of the present application. The device and the server for acquiring data from the camera can be used for realizing the functions of the server 220 in the above method embodiment, and therefore, the beneficial effects of the above method embodiment can also be realized. In the embodiment of the present application, the device for acquiring data from the camera may be the server 120 shown in fig. 1, and may also be a module (e.g., a chip) applied to the server 120.
As shown in fig. 6, the apparatus 600 for acquiring data from a camera includes a communication unit 610, a first acquisition unit 620, a second acquisition unit 630, a loading unit 640, a registration unit 650, an association unit 660, and an inquiry unit 670, and the apparatus 600 for acquiring data from a camera is used to implement the functions of the server in the method embodiments shown in fig. 2 to 5.
When the apparatus 600 for acquiring data from a camera is used to implement the functions of the server 220 in the method embodiment shown in fig. 2: the communication unit 610 is configured to execute S210 to S230, and the loading unit 640 is configured to select a first protocol plug-in and a second protocol plug-in from a plurality of locally stored protocol plug-ins and load them respectively; the registration unit 650 is configured to register the camera with the camera identification, registering the camera as supporting the first camera data transmission protocol and the second camera data transmission protocol.
When the apparatus 600 for acquiring data from a camera is used to implement the function of the server 220 in the method embodiment shown in fig. 4: the communication unit 610 is configured to perform S410, the first obtaining unit 620 is configured to perform S430, and the second obtaining unit 630 is configured to perform S440.
When the apparatus 600 for acquiring data from a camera is used to implement the functions of the server 220 in the embodiment of the method shown in fig. 5: the communication unit 610 is configured to perform S410, the first obtaining unit 620 is configured to perform S430, and the second obtaining unit 630 is configured to perform S440.
The association unit 660 is configured to establish a first corresponding relationship between the camera identifier and the first data, and establish a second corresponding relationship between the camera identifier and the second data. The query unit 670 is configured to obtain the first data and the second data associated with the camera from the service data according to the query request.
Optionally, the apparatus 600 for acquiring data from a camera may further include a storage unit, where the storage unit may be configured to record a one-to-one correspondence between a data type and a data transmission protocol of the camera, and the storage unit may further store service data of one or more cameras.
For the communication unit 610, the first obtaining unit 620, the second obtaining unit 630, the loading unit 640, the registration unit 650, the association unit 660, and the query unit 670, more detailed descriptions can be obtained by referring to the related descriptions in the method embodiments shown in fig. 2 to fig. 5, which are not repeated here.
As shown in fig. 7, fig. 7 is a schematic structural diagram of a server provided in the present application. The server 700 includes a processor 710 and an interface circuit 720 (which may also be referred to as a communication interface 720), and the processor 710 and the interface circuit 720 are coupled to each other. It can be understood that the interface circuit 720 may be a transceiver or an input/output interface. Optionally, the server 700 may further include a memory 730 for storing instructions executed by the processor 710, for storing input data required by the processor 710 to execute the instructions, or for storing data generated after the processor 710 executes the instructions.
When the server 700 is used to implement the methods shown in fig. 2 to 5, the processor 710 and the interface circuit 720 are configured to perform the functions of the communication unit 610, the first obtaining unit 620, the second obtaining unit 630, the loading unit 640, the registering unit 650, the associating unit 660, and the querying unit 670. The processor 710, communication interface 720, and memory 730 may further cooperate to implement the various operational steps in a method of acquiring data from a camera performed by a server. The server 700 may also perform the functions of the apparatus 600 for acquiring data from a camera shown in fig. 6, which are not described herein.
The specific connection medium among the communication interface 720, the processor 710, and the memory 730 is not limited in the embodiments of the present application. In fig. 7, the communication interface 720, the processor 710, and the memory 730 are connected by a bus 740, represented by a thick line; the manner of connection between other components is merely illustrative and is not limiting. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 7, but this does not mean that there is only one bus or one type of bus.
The memory 730 can be used for storing software programs and modules, such as program instructions/modules corresponding to the method for acquiring data from a camera provided by the embodiments of the present application, and the processor 710 can execute various functional applications and data processing by executing the software programs and modules stored in the memory 730. The communication interface 720 may be used for signaling or data communication with other devices. The server 700 may have a plurality of communication interfaces 720 in this application.
It is understood that the processor in the embodiments of the present application may be a Central Processing Unit (CPU), a Neural Processing Unit (NPU), a Graphics Processing Unit (GPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general purpose processor may be a microprocessor or any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or by software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in a Random Access Memory (RAM), a flash memory, a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a network device or a terminal device. Of course, the processor and the storage medium may also reside as discrete components in a network device or a terminal device.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a user device, or another programmable apparatus. The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; an optical medium, such as a Digital Video Disc (DVD); or a semiconductor medium, such as a Solid State Drive (SSD).
In the embodiments of the present application, unless otherwise specified or conflicting with respect to logic, the terms and/or descriptions in different embodiments have consistency and may be mutually cited, and technical features in different embodiments may be combined to form a new embodiment according to their inherent logic relationship.
In this application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. In the text descriptions of this application, the character "/" generally indicates that the associated objects before and after it are in an "or" relationship; in the formulas of this application, the character "/" indicates that the associated objects before and after it are in a "division" relationship. Furthermore, elements that appear in the singular forms "a", "an", and "the" do not mean "one and only one" unless the context clearly indicates otherwise, but rather "one or more than one"; for example, "a device" means one or more such devices.
It is to be understood that the various numerical references referred to in the embodiments of the present application are merely for descriptive convenience and are not intended to limit the scope of the embodiments of the present application. The sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of the processes should be determined by their functions and inherent logic.

Claims (25)

1. A method of acquiring data from a camera, the method comprising:
the server obtains a first read request for requesting first data and a second read request for requesting second data, wherein: the first read request and the second read request carry the same camera identifier, the data type of the first data is a first data type, the data type of the second data is a second data type different from the first data type, the first read request carries a first data type identifier marking the first data type, and the second read request carries a second data type identifier marking the second data type;
acquiring the first data from the camera corresponding to the camera identification by using a first camera data transmission protocol supporting the first data type;
and acquiring the second data from the camera corresponding to the camera identification by using a second camera data transmission protocol supporting the second data type, wherein the second camera data transmission protocol is different from the first camera data transmission protocol.
2. The method of claim 1, wherein:
acquiring the first data using the first camera data transfer protocol, including:
scheduling the first read request to a first protocol plug-in supporting the first camera data transmission protocol, and acquiring the first data from the camera based on the first camera data transmission protocol by using the first protocol plug-in;
acquiring the second data using the second camera data transfer protocol, comprising:
and scheduling the second read request to a second protocol plug-in supporting the second camera data transmission protocol, and acquiring the second data from the camera based on the second camera data transmission protocol by using the second protocol plug-in.
3. The method of claim 2, wherein before the server obtains the first read request and obtains the second read request, the method comprises:
the server acquires a plug-in loading request from a client, wherein the loading request carries: an instruction to load the first protocol plug-in and an instruction to load the second protocol plug-in;
and the server selects the first protocol plug-in and the second protocol plug-in from a plurality of locally configured protocol plug-ins to be respectively loaded.
4. The method according to claim 2 or 3, wherein a one-to-one correspondence between data types and camera data transmission protocols is recorded in the server, wherein:
using a first camera data transfer protocol that supports the first data type, comprising: selecting the first protocol plug-in corresponding to the first data type according to the first data type;
using a second camera data transfer protocol that supports the second data type, comprising: and selecting the second protocol plug-in corresponding to the second data type according to the second data type.
5. The method according to any one of claims 2-4, wherein the first protocol plug-in is provided with uplink ports and downlink ports, wherein:
each uplink port is used for accessing a client;
each downlink port is used for accessing a camera.
6. The method of claim 1, wherein:
the data types supported by the first camera data transmission protocol include video, the data types it does not support include images, and the first data is a video;
the data types supported by the second camera data transmission protocol include images, the data types it does not support include video, and the second data is an image.
7. The method according to any of claims 1-6, wherein the first camera data transmission protocol and the second camera data transmission protocol are each one of the following protocols:
GB/T28181 protocol, ONVIF protocol, RTSP protocol, GA/T1400 protocol, and vendor proprietary protocols.
8. The method according to any one of claims 1-7, further comprising:
the server acquires registration information of the camera;
the server registers the camera with the camera identification, registering the camera as supporting: the first camera data transmission protocol and the second camera data transmission protocol, wherein the camera identification of the camera is unique.
9. The method of claim 8, further comprising:
the server synchronizes the camera identification and the protocol corresponding to the camera identification to the client; the client is used for sending the first read request and the second read request.
10. The method according to any one of claims 1-9, further comprising:
the server establishes a first corresponding relation between the camera identification and the first data;
and the server establishes a second corresponding relation between the camera identification and the second data.
11. The method according to any one of claims 1-10, wherein service data of one or more cameras is stored in the server, and the method further comprises:
the server acquires a query request from a client; the query request carries the camera identification;
and the server acquires the first data and the second data related to the camera from the service data stored in the server according to the query request.
12. An apparatus for acquiring data from a camera, the apparatus comprising:
a communication unit, configured to obtain a first read request requesting first data and a second read request requesting second data, wherein: the first read request and the second read request carry the same camera identifier, the data type of the first data is a first data type, the data type of the second data is a second data type different from the first data type, the first read request carries a first data type identifier marking the first data type, and the second read request carries a second data type identifier marking the second data type;
a first obtaining unit, configured to obtain the first data from the camera corresponding to the camera identifier by using a first camera data transmission protocol that supports the first data type;
a second obtaining unit, configured to obtain the second data from the camera corresponding to the camera identifier using a second camera data transmission protocol that supports the second data type, where the second camera data transmission protocol is different from the first camera data transmission protocol.
13. The apparatus of claim 12, wherein:
the first obtaining unit is specifically configured to schedule the first read request to a first protocol plug-in that supports the first camera data transmission protocol, and obtain the first data from the camera based on the first camera data transmission protocol by using the first protocol plug-in;
the second obtaining unit is specifically configured to schedule the second read request to a second protocol plug-in that supports the second camera data transmission protocol, and obtain the second data from the camera based on the second camera data transmission protocol by using the second protocol plug-in.
14. The apparatus according to claim 13, wherein the communication unit is further configured to obtain a plug-in loading request from a client, where the loading request carries: an indication to load the first protocol plug-in, an indication to load the second protocol plug-in;
the device further comprises: a loading unit;
the loading unit is configured to select the first protocol plug-in and the second protocol plug-in from a plurality of locally configured protocol plug-ins for respective loading.
15. The apparatus according to claim 13 or 14, wherein the apparatus further comprises a storage unit, and the storage unit records therein a one-to-one correspondence relationship between data types and camera data transmission protocols, wherein:
the first obtaining unit is further specifically configured to select, according to the first data type, the first protocol plug-in corresponding to the first data type;
the second obtaining unit is further specifically configured to select, according to the second data type, the second protocol plug-in corresponding to the second data type.
16. The apparatus according to any one of claims 13-15, wherein the first protocol plug-in is provided with uplink ports and downlink ports, wherein:
each uplink port is used for accessing a client;
each downlink port is used for accessing a camera.
17. The apparatus of claim 12, wherein:
the data types supported by the first camera data transmission protocol include video, the data types it does not support include images, and the first data is a video;
the data types supported by the second camera data transmission protocol include images, the data types it does not support include video, and the second data is an image.
18. The apparatus of any of claims 12-17, wherein the first camera data transmission protocol and the second camera data transmission protocol are each one of:
GB/T28181 protocol, ONVIF protocol, RTSP protocol, GA/T1400 protocol, and vendor proprietary protocols.
19. The apparatus according to any one of claims 12-18, wherein the communication unit is further configured to obtain registration information of the camera;
the device further comprises: a registration unit;
the registration unit is configured to register the camera with the camera identifier, and register the camera as supporting: the first camera data transmission protocol, the second camera data transmission protocol.
20. The apparatus of claim 19, wherein the communication unit is further configured to synchronize the camera identifier and a protocol corresponding to the camera identifier to a client; the client is used for sending the first read request and the second read request.
21. The apparatus according to any one of claims 12-20, further comprising: an association unit;
the association unit is used for establishing a first corresponding relation between the camera identification and the first data;
the association unit is further configured to establish a second correspondence between the camera identifier and the second data.
22. The apparatus according to any one of claims 12-21, wherein the apparatus further comprises a storage unit in which service data of one or more cameras is stored;
the communication unit is further used for acquiring a query request from a client; the query request carries the camera identification;
the device further comprises: a query unit;
the query unit is configured to obtain the first data and the second data associated with the camera from the service data stored in the storage unit according to the query request.
23. A server comprising a processor and interface circuitry for receiving and transmitting signals to or from a device other than the server, the processor being arranged to implement the method of any of claims 1 to 11 by means of logic circuitry or executing code instructions.
24. An imaging management system, comprising: a server and one or more cameras;
the server obtains a first read request for requesting first data and a second read request for requesting second data, wherein: the first read request and the second read request carry the same camera identifier, the camera identifier corresponds to one of the one or more cameras, the data type of the first data is a first data type, the data type of the second data is a second data type different from the first data type, the first read request carries a first data type identifier marking the first data type, and the second read request carries a second data type identifier marking the second data type;
the server acquires the first data from the camera corresponding to the camera identification by using a first camera data transmission protocol supporting the first data type;
and the server acquires the second data from the camera corresponding to the camera identification by using a second camera data transmission protocol supporting the second data type, wherein the second camera data transmission protocol is different from the first camera data transmission protocol.
25. A computer-readable storage medium, in which a computer program or instructions is stored which, when executed by a processor in a server, implements the method of any one of claims 1 to 11.
CN202110961751.XA 2021-06-30 2021-08-20 Method and device for acquiring data from camera Pending CN115550379A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/081986 WO2023273426A1 (en) 2021-06-30 2022-03-21 Method and apparatus for acquiring data from camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110739020 2021-06-30
CN2021107390200 2021-06-30

Publications (1)

Publication Number Publication Date
CN115550379A true CN115550379A (en) 2022-12-30

Family

ID=84722955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110961751.XA Pending CN115550379A (en) 2021-06-30 2021-08-20 Method and device for acquiring data from camera

Country Status (1)

Country Link
CN (1) CN115550379A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination