CN109698850B - Processing method and system - Google Patents

Processing method and system

Info

Publication number
CN109698850B
Authority
CN
China
Prior art keywords
server
camera
processing
video data
images
Prior art date
Legal status
Active
Application number
CN201710998860.2A
Other languages
Chinese (zh)
Other versions
CN109698850A (en)
Inventor
谢大斌 (Xie Dabin)
陈宇 (Chen Yu)
刘强 (Liu Qiang)
翁志 (Weng Zhi)
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201710998860.2A
Publication of CN109698850A
Application granted
Publication of CN109698850B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/172: the region being a picture, frame or field
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/268: Signal distribution or switching

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure provides a processing method, applied to a first server, the method including: processing video data acquired by at least one first camera to obtain one or more first images; and uploading at least one of the one or more first images to a second server, wherein the first server is different from the second server.

Description

Processing method and system
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a processing method and a processing system.
Background
With the rapid development of artificial intelligence, automatic control, communication, and computer technology, camera devices are increasingly used in a variety of everyday scenarios. For example, in scenarios such as smart stores and unmanned supermarkets, on-site video data acquired by camera devices serves as the basis for meeting requirements such as face recognition, object detection, and passenger flow analysis.
However, in the process of implementing the present inventive concept, the inventors found that the prior art has at least the following problem: the live video data collected by a camera device must be uploaded to a background server for processing and analysis, and directly uploading the video data occupies a relatively large bandwidth.
Disclosure of Invention
In view of the above, the present disclosure provides a processing method and a processing system with low bandwidth occupation.
One aspect of the present disclosure provides a processing method applied to a first server, the method including: the method comprises the steps of processing video data acquired by at least one first camera to obtain one or more first images, and uploading at least one of the one or more first images to a second server, wherein the first server is different from the second server.
According to an embodiment of the present disclosure, the processing the video data acquired by the at least one first camera to obtain one or more first images includes: and decoding the video data to obtain at least one video frame, and encoding a specific video frame in the at least one video frame into a first image.
According to an embodiment of the present disclosure, the method further includes: and controlling at least one second camera to acquire one or more second images, and uploading at least one of the one or more second images to the second server.
According to an embodiment of the present disclosure, the method further includes: the method comprises the steps of obtaining a grappling task from a third server, and synchronizing the grappling task into the first server, wherein the first server is different from the third server.
According to an embodiment of the present disclosure, the processing of the video data acquired by the at least one first camera includes: and processing the video data acquired by the corresponding at least one first camera according to the image capture task, wherein the image capture tasks corresponding to different first cameras are the same or different.
According to an embodiment of the present disclosure, the method further includes: uploading the execution state of the image capture task to the third server.
Another aspect of the present disclosure provides a processing system applied to a first server, the system including a processing module and a first uploading module. The processing module processes video data acquired by at least one first camera to obtain one or more first images. The first uploading module uploads at least one of the one or more first images to a second server, wherein the first server is different from the second server.
According to an embodiment of the present disclosure, the processing the video data acquired by the at least one first camera to obtain one or more first images includes: the method comprises the steps of decoding the video data to obtain at least one video frame, and encoding a specific video frame in the at least one video frame into a first image.
According to an embodiment of the present disclosure, the system further includes: the device comprises a control module and a second uploading module. The control module controls at least one second camera to acquire one or more second images. A second upload module uploads at least one of the one or more second images to the second server.
According to an embodiment of the present disclosure, the system further includes: the device comprises an acquisition module and a synchronization module. The acquisition module acquires the grab image task from the third server. A synchronization module synchronizes the grapple task into the first server, wherein the first server is different from the third server.
According to an embodiment of the present disclosure, the processing of the video data acquired by the at least one first camera includes: and processing the video data acquired by the corresponding at least one first camera according to the image capture task, wherein the image capture tasks corresponding to different first cameras are the same or different.
According to an embodiment of the present disclosure, the system further includes a third uploading module, which uploads the execution state of the image capture task to the third server.
Another aspect of the present disclosure provides a processing system comprising: one or more processors; a storage device to store one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method as described above.
Another aspect of the disclosure provides a computer-readable medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the method as described above.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiment of the disclosure, the problem of high bandwidth occupation caused by directly uploading video data in the prior art can be at least partially solved, thereby achieving the technical effect of low bandwidth occupation.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates an exemplary system architecture of a processing method and a processing system that may be applied to a first server according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow chart of a processing method according to an embodiment of the present disclosure;
FIG. 3 schematically shows a flow chart of a processing method according to another embodiment of the present disclosure;
FIG. 4 schematically shows a block diagram of a processing system according to an embodiment of the present disclosure;
FIG. 5 schematically shows a block diagram of a processing system according to another embodiment of the present disclosure; and
FIG. 6 schematically shows a block diagram of a computer system suitable for implementing the processing method according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibilities of "A", "B", or "A and B".
The embodiment of the disclosure provides a processing method applied to a first server and a processing system capable of applying the method. The method comprises the following steps: the method comprises the steps of processing video data acquired by at least one first camera to obtain one or more first images, and uploading at least one of the one or more first images to a second server, wherein the first server is different from the second server.
Fig. 1 schematically illustrates an exemplary system architecture 100 of a processing method and processing system that may be applied to a first server according to an embodiment of the disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include cameras 101, 102, and 103, a first server 104, a network 105, and a second server 106. The first server 104 can receive video data collected by the cameras 101, 102, and 103. The network 105 provides a medium for a communication link between the first server 104 and the second server 106, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
The first server 104 interacts with the second server 106 via the network 105 to receive or send messages or the like. The first server may be various electronic devices, including but not limited to a computer, a micro server, etc., which can be connected to the cameras 101, 102, 103, receive video data collected by the cameras, and process the video data.
The second server 106 may be a server that provides various services, such as a background management server that processes and analyzes data transmitted by the first server 104. The background management server may perform processing such as analysis on the received video data, and feed back a processing result (e.g., a face recognition result, an object detection result, a passenger flow analysis result, etc.) to the first server 104.
It should be noted that the processing method provided by the embodiment of the present disclosure may generally be executed by the first server 104, and accordingly, the processing system provided by the disclosed embodiments may generally be disposed in the first server 104. The processing method may also be executed by a server or server cluster that is different from the first server 104 and capable of communicating with the cameras 101, 102, 103 and/or the second server 106; accordingly, the processing system may also be disposed in such a server or server cluster.
For example, the first server 104 may receive video data collected by the camera devices 101, 102, and 103, process the video data to obtain one or more first images, and upload at least one of the first images to the second server 106 through the network 105.
It should be understood that the number of cameras, first servers, networks, and second servers in fig. 1 is merely illustrative. There may be any number of cameras, first servers, networks, and second servers, as desired for implementation.
Processing methods according to exemplary embodiments of the present disclosure are described below with reference to fig. 2-3 in conjunction with the system architecture of fig. 1. It should be noted that the above-described system architecture is shown merely for the purpose of facilitating understanding of the spirit and principles of the present disclosure, and embodiments of the present disclosure are not limited in any way in this respect. Rather, embodiments of the present disclosure may be applied to any system architecture where applicable.
FIG. 2 schematically shows a flow chart of a processing method according to an embodiment of the disclosure.
As shown in fig. 2, the method includes operations S201 to S202.
The processing method provided by the embodiment of the disclosure can be applied to a first server, and the first server can be, for example, an Intel GS63 mini-server.
In operation S201, video data acquired by at least one first camera is processed to obtain one or more first images.
According to the embodiment of the disclosure, the first camera may be, for example, an ordinary camera with a video-shooting function, and it can collect video data within its shooting range. The first server may obtain the video data from the first camera, or the first camera may transmit the collected video data to the first server. In an embodiment of the present disclosure, the first server may be a local server located near the first camera.
The first server processes the video data acquired by the first camera to obtain one or more first images; that is, it may process the video stream data and convert it into one or more images in a still-picture format.
According to the embodiment of the present disclosure, the processing may include, for example, decoding the video data to obtain at least one video frame, and encoding a specific video frame of the at least one video frame into the first image.
In the embodiment of the present disclosure, the video data obtained by the camera may be decoded by decoding software to obtain a plurality of video frames; for example, the video data may be decoded by the Intel Media SDK. Then, a corresponding video frame is selected from the decoded video frames as a specific video frame according to requirements, and the selected specific video frame is encoded or transcoded to generate a first image, which may be, for example, in JPEG format. For example, a capture interface of the Intel Media SDK may be invoked to capture one or more first images from the video data on demand.
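The frame-sampling step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the actual decode and JPEG-encode calls (handled by the Intel Media SDK in the text) are represented only by comments, and all function names are hypothetical.

```python
def frames_to_capture(stream_fps: int, captures_per_sec: int) -> list[int]:
    """Indices, within one second of decoded video frames, of the frames
    selected as "specific video frames" (evenly spaced across the second)."""
    if captures_per_sec <= 0 or captures_per_sec > stream_fps:
        raise ValueError("captures_per_sec must be in 1..stream_fps")
    step = stream_fps / captures_per_sec
    return [round(i * step) for i in range(captures_per_sec)]

def process_one_second(frames: list[bytes], captures_per_sec: int) -> list[bytes]:
    """frames: one second's worth of decoded video frames. Returns the
    subset that would be JPEG-encoded into first images and queued for
    upload to the second server."""
    keep = frames_to_capture(len(frames), captures_per_sec)
    # In the real pipeline each selected frame would be JPEG-encoded here
    # (e.g. via the Media SDK's encode interface) before uploading.
    return [frames[i] for i in keep]
```

For a 24 fps stream and a task of 6 first images per second, this keeps every fourth decoded frame.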
In operation S202, at least one of the one or more first images is uploaded to a second server, wherein the first server is different from the second server.
According to the embodiment of the disclosure, the first server may be, for example, a local server located near the first camera, configured to process the video data acquired by the first camera and upload the resulting first images to the second server. The second server may be, for example, a remote server located far from the first camera, configured to perform processing such as face recognition, object detection, and passenger flow analysis based on the received first images.
It can be understood that all the obtained first images may be uploaded to the second server, or a part of the first images may be uploaded to the second server as needed.
In the embodiment of the present disclosure, the obtained first images may also be stored in a buffer queue, from which multiple processing threads may rotate or scale them as needed. The first images can also be uploaded to a cloud server for processing by different back-end services and algorithms, and any first images to be retained may be saved to a storage server.
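The buffer-queue-plus-worker-threads arrangement can be sketched as below. This is an illustrative assumption about the structure, not the patent's code: images are modeled as 2-D lists of pixel values rather than real image buffers, and the rotation stands in for whatever rotate/scale processing a task requires.

```python
import queue
import threading

def rotate90(img):
    """img: 2-D list of pixel rows; returns the 90-degree clockwise rotation."""
    return [list(row) for row in zip(*img[::-1])]

def worker(in_q: queue.Queue, out: list, lock: threading.Lock):
    """Drain the buffer queue; a None item is the stop sentinel."""
    while True:
        img = in_q.get()
        if img is None:
            return
        result = rotate90(img)        # rotate/scale as the task requires
        with lock:
            out.append(result)

def process_buffered(images, n_threads=2):
    """Buffer first images in a queue and process them with a pool of
    threads, as the text describes; returns the transformed images."""
    q, out, lock = queue.Queue(), [], threading.Lock()
    threads = [threading.Thread(target=worker, args=(q, out, lock))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for im in images:
        q.put(im)
    for _ in threads:
        q.put(None)                   # one sentinel per worker
    for t in threads:
        t.join()
    return out
```

In a real deployment the workers would hand the transformed images to the upload path rather than collect them in a list.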
According to the embodiment of the present disclosure, a second camera may further be included. The second camera may be, for example, a customized camera that directly acquires image data or has an image processing function of its own.
The embodiment of the disclosure further includes controlling at least one second camera to acquire one or more second images, and uploading at least one second image to the second server. For example, the second camera may directly collect image data (e.g., take a picture), or it may collect video data but carry a built-in microprocessor (on which, for example, an Intel Media SDK application may be installed) that processes the collected video data into a second image, which is then uploaded to the second server.
It can be understood that the second camera may directly upload the acquired or processed second image to the second server, or may first upload the second image to the first server, and then upload the second image to the second server after the first server performs scaling or rotation processing.
According to the embodiment of the disclosure, the video data is processed locally to obtain the corresponding image, and then the image is uploaded to the second server, so that the bandwidth occupied by the method is lower than that occupied by directly uploading the video data to the second server. For example, in scenes such as an intelligent store, an unmanned supermarket and the like, a plurality of first cameras acquire a large amount of video data, and if the video data are directly uploaded to a background server for processing, a relatively high bandwidth is occupied in the uploading process.
According to the embodiment of the disclosure, the video data is processed by the first server to obtain the corresponding image, and then the image is uploaded to the second server, and the image is analyzed by the second server. For example, in scenes such as an intelligent store, an unmanned supermarket and the like, a large amount of video data are acquired by the plurality of first cameras, if an image processor (e.g., a GPU) is locally arranged to directly analyze and process the video data (e.g., face recognition, passenger flow analysis and the like), higher cost is caused, resource sharing cannot be achieved, and large-scale deployment is not facilitated.
The embodiment of the disclosure adopts the first camera (for example, an ordinary camera) and the second camera (for example, a customized camera) working in coordination, which can reduce the processing pressure on the first server. For example, the storage area of an unmanned supermarket only needs image information at a low frequency, so a second camera can be placed there to acquire one image per fixed interval, reducing the pressure on the first server to process video data.
Fig. 3 schematically shows a flow chart of a processing method according to another embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S301 to S305.
In operation S301, an image capture task is acquired from a third server.
According to an embodiment of the disclosure, the third server is different from the first server but can communicate with it. The third server may be a cloud server with a web management page, through which a user can issue image capture tasks.
An image capture task may be issued separately for each camera according to requirements. For example, a capture task may require that 6 first images per second be captured from the video data acquired by camera No. 1, or that camera No. 2 be controlled to acquire 1 second image per minute.
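A per-camera capture task of this kind could be represented as a small record; the field names below are illustrative assumptions, not from the patent, and the two instances encode the two examples in the text.

```python
from dataclasses import dataclass

@dataclass
class CaptureTask:
    """One per-camera capture task as issued from the management page
    (field names are hypothetical)."""
    camera_id: str
    source: str            # "video" (first camera) or "still" (second camera)
    images_per_period: int
    period_s: float        # 1.0 -> per second, 60.0 -> per minute

# The two examples from the text:
task_cam1 = CaptureTask("cam1", "video", 6, 1.0)    # 6 first images per second
task_cam2 = CaptureTask("cam2", "still", 1, 60.0)   # 1 second image per minute

def interval_s(t: CaptureTask) -> float:
    """Seconds between consecutive captures for this task."""
    return t.period_s / t.images_per_period
```

Keeping the period explicit lets per-second and per-minute tasks share one schema.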
The first server acquires the currently issued image capture tasks from the third server; for example, it may poll the third server at predetermined intervals.
In operation S302, the image capture task is synchronized to the first server.
According to the embodiment of the disclosure, the acquired image capture tasks are compared with the tasks currently executing locally to determine which local tasks need to be added, deleted, or modified, and the local task list is then updated accordingly.
In operation S303, the video data acquired by the corresponding first camera is processed according to the capture task to obtain a first image.
According to the embodiment of the disclosure, the image capture tasks corresponding to different first cameras may be the same or different. For example, if the capture task corresponding to camera No. 1 is 6 first images per second, the video data acquired by camera No. 1 is decoded into a plurality of video frames, and 6 video frames per second are selected and encoded or transcoded into 6 first images. If the capture task corresponding to camera No. 3 is 1 first image per second, the video data acquired by camera No. 3 is decoded into a plurality of video frames, and 1 video frame per second is selected and encoded or transcoded into 1 first image.
In operation S304, at least one second camera is controlled to acquire one or more second images according to the image capture task.
According to the embodiment of the disclosure, the image capture tasks corresponding to different second cameras may be the same or different. For example, if the capture task corresponding to camera No. 2 is 1 second image per minute, the second camera is controlled to acquire one second image per minute.
It can be appreciated that cameras in different locations require different frame rates. For example, in an unmanned supermarket, the storage area changes little, so its camera needs only a low frame rate, whereas the purchase area carries a large amount of information and requires a high frame rate. A first camera may be used where a high frame rate is required, acquiring video data that is then processed, and a second camera may be used where a low frame rate suffices, acquiring image data directly.
In operation S305, the execution state of the image capture task is uploaded to the third server.
According to the embodiment of the disclosure, the first server uploads the execution state of the image capture task corresponding to each local camera to the third server at regular intervals, thereby keeping tasks synchronized between the first server and the third server.
For example, a task may be marked as Ready when it is created on the third server. After the task is synchronized to the first server and starts executing normally, the first server uploads its execution state, and the third server marks it as Running. If the task fails to start, the third server marks it as Failed once the state is uploaded. Likewise, a task whose execution has ended may be marked as Finished. After a Stop operation is performed on the third server, the corresponding task switches to the Stopping state; once the first server synchronizes the update, it stops and deletes the Stopping task and reports back, and the third server marks the task as Stopped.
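The task lifecycle above amounts to a small state machine. The sketch below encodes only the transitions the text states explicitly; the patent may of course permit others (e.g. stopping a task that has not yet started).

```python
# Transitions explicitly described in the text; anything else is rejected.
ALLOWED = {
    "Ready":    {"Running", "Failed"},   # started normally / failed to start
    "Running":  {"Finished", "Stopping"},
    "Stopping": {"Stopped"},             # first server stops, deletes, reports
}

def advance(state: str, new_state: str) -> str:
    """Move a task to new_state, rejecting transitions not in the text."""
    if new_state not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

Validating transitions this way would let the third server reject stale or out-of-order status reports from a first server.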
The processing method provided by the embodiment of the disclosure enables a user to issue tasks directly on the third server and, through the synchronization between the first server and the third server, observe the execution status of each task. For example, in an unmanned-supermarket scenario, the third server can distribute tasks to many cameras across multiple unmanned supermarkets at once while monitoring their task completion states, making the operation flexible and scalable.
In the embodiment of the disclosure, each camera has its own image capture task, and the first server applies different capture processing to different cameras according to each camera's task, so the processing is more flexible and better adapted to diverse conditions, avoiding the high bandwidth occupation that would result from all cameras directly uploading their video data to a background server.
Fig. 4 schematically shows a block diagram of a processing system 400 according to an embodiment of the present disclosure.
As shown in fig. 4, the processing system 400, including the processing module 410 and the first upload module 420, may perform the method described above with reference to fig. 2.
The processing module 410 processes video data acquired by at least one first camera to obtain one or more first images. According to the embodiment of the present disclosure, the processing module 410 may perform, for example, the operation S201 described above with reference to fig. 2, which is not described herein again.
The first upload module 420 uploads at least one of the one or more first images to a second server, wherein the first server is different from the second server. According to the embodiment of the present disclosure, the first upload module 420 may, for example, perform operation S202 described above with reference to fig. 2, which is not described herein again.
Fig. 5 schematically shows a block diagram of a processing system 400 according to another embodiment of the disclosure.
As shown in fig. 5, the processing system 400 includes a processing module 410, a first upload module 420, a control module 430, a second upload module 440, an acquisition module 450, a synchronization module 460, and a third upload module 470. The processing module 410 and the first uploading module 420 are the same as or similar to the modules described above with reference to fig. 4, and are not described again here.
The control module 430 controls at least one second camera to acquire one or more second images.
The second upload module 440 uploads at least one of the one or more second images to the second server.
The obtaining module 450 obtains the image capture task from the third server. According to the embodiment of the present disclosure, the obtaining module 450 may perform, for example, operation S301 described above with reference to fig. 3, which is not described herein again.
The synchronization module 460 synchronizes the image capture task to a first server, wherein the first server is different from the third server. According to the embodiment of the present disclosure, the synchronization module 460 may perform, for example, operation S302 described above with reference to fig. 3, which is not described herein again.
The third upload module 470 uploads the execution status of the image capture task to the third server. According to an embodiment of the present disclosure, the third upload module 470 may perform, for example, operation S305 described above with reference to fig. 3, which is not described herein again.
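The task flow across the three modules above (operations S301, S302, and S305) can be sketched as follows. Both servers are modeled as in-memory dicts and every name is illustrative; in practice these calls would be RPC or HTTP requests between distinct machines:

```python
# Sketch of the image-capture task flow: a user issues a task on the third
# server; the first server fetches it, synchronizes it locally, and reports
# the execution state back, so the user can monitor every task from the
# third server. Servers are modeled as plain dicts for illustration.

third_server = {"tasks": {"t1": {"camera": "cam_a", "every_n": 25}}, "status": {}}
first_server = {"tasks": {}}

def obtain_task(task_id):
    """Operation S301: fetch the image capture task from the third server."""
    return third_server["tasks"][task_id]

def synchronize_task(task_id, task):
    """Operation S302: store the task on the (distinct) first server."""
    first_server["tasks"][task_id] = dict(task)

def upload_execution_status(task_id, state):
    """Operation S305: report the task's execution state to the third server."""
    third_server["status"][task_id] = state

task = obtain_task("t1")
synchronize_task("t1", task)
upload_execution_status("t1", "running")
# ... capture work happens on the first server ...
upload_execution_status("t1", "done")
print(third_server["status"]["t1"])  # → done
```

Because status is written back to the third server, a user who issued the task there never needs to contact the first server directly, which is the monitoring arrangement described above.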
It is understood that the processing module 410, the first uploading module 420, the control module 430, the second uploading module 440, the obtaining module 450, the synchronizing module 460 and the third uploading module 470 may be combined and implemented in one module, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the processing module 410, the first upload module 420, the control module 430, the second upload module 440, the acquisition module 450, the synchronization module 460, and the third upload module 470 may be at least partially implemented as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on a chip, a system on a substrate, a system in a package, or an application specific integrated circuit (ASIC); may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit; or may be implemented in any suitable combination of software, hardware, and firmware. Alternatively, at least one of the processing module 410, the first upload module 420, the control module 430, the second upload module 440, the acquisition module 450, the synchronization module 460, and the third upload module 470 may be at least partially implemented as a computer program module that, when executed by a computer, performs the functions of the respective module.
FIG. 6 schematically shows a block diagram of a computer system suitable for implementing the processing method according to an embodiment of the disclosure. The computer system illustrated in FIG. 6 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 6, a computer system 600 according to an embodiment of the present disclosure includes a processor 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. Processor 601 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 601 may also include onboard memory for caching purposes. Processor 601 may include a single processing unit or multiple processing units for performing the different actions of the method flows described with reference to fig. 2-3 in accordance with embodiments of the present disclosure.
In the RAM 603, various programs and data necessary for the operation of the system 600 are stored. The processor 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. The processor 601 performs various operations of the processing method described above with reference to fig. 2 to 3 by executing programs in the ROM 602 and/or the RAM 603. It is to be noted that the programs may also be stored in one or more memories other than the ROM 602 and RAM 603. The processor 601 may also perform various operations of the processing methods described above with reference to fig. 2-3 by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 600 may also include an input/output (I/O) interface 605, which is likewise connected to the bus 604. The system 600 may also include one or more of the following components connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
According to an embodiment of the present disclosure, the method described above with reference to the flow chart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program, when executed by the processor 601, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing. 
According to embodiments of the present disclosure, a computer-readable medium may include the ROM 602 and/or RAM 603 described above and/or one or more memories other than the ROM 602 and RAM 603.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present disclosure also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: process video data acquired by at least one first camera to obtain one or more first images, and upload at least one of the one or more first images to a second server, wherein the first server is different from the second server.
According to an embodiment of the present disclosure, processing the video data acquired by the at least one first camera to obtain one or more first images includes: decoding the video data to obtain at least one video frame, and encoding a specific video frame of the at least one video frame into a first image.
According to an embodiment of the present disclosure, the method further includes: controlling at least one second camera to acquire one or more second images, and uploading at least one of the one or more second images to the second server.
According to an embodiment of the present disclosure, the method further includes: obtaining an image capture task from a third server, and synchronizing the image capture task into the first server, wherein the first server is different from the third server.
According to an embodiment of the present disclosure, the processing of the video data acquired by the at least one first camera includes: processing the video data acquired by the corresponding at least one first camera according to the image capture task, wherein the image capture tasks corresponding to different first cameras are the same or different.
According to an embodiment of the present disclosure, the method further includes: uploading the execution state of the image capture task to the third server.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (12)

1. A processing method applied to a first server, the method comprising:
processing video data acquired by at least one first camera to obtain one or more first images; and
uploading at least one of the one or more first images to a second server, wherein the first server is different from the second server, and the second server is configured to perform at least one of the following operations according to the received first image: face recognition, object detection and passenger flow analysis;
controlling at least one second camera to acquire one or more second images; and
uploading at least one of the one or more second images to the second server;
wherein the first camera is installed in a goods purchase area, and the second camera is installed in a storage area.
2. The method of claim 1, wherein the processing video data acquired by the at least one first camera to obtain one or more first images comprises:
decoding the video data to obtain at least one video frame;
encoding a particular video frame of the at least one video frame into a first image.
3. The method of claim 1, wherein:
the method further comprises the following steps:
acquiring an image capture task from a third server;
synchronizing the image capture task into the first server, wherein the first server is different from the third server;
the processing of the video data acquired by the at least one first camera includes:
processing the video data acquired by the corresponding at least one first camera according to the image capture task.
4. The method according to claim 3, wherein the image capture tasks corresponding to different first cameras are the same or different.
5. The method of claim 3, further comprising:
uploading the execution state of the image capture task to the third server.
6. A processing system for application to a first server, the system comprising:
the processing module is used for processing the video data acquired by the at least one first camera to obtain one or more first images; and
a first upload module that uploads at least one of the one or more first images to a second server, wherein the first server is different from the second server; the second server is used for performing at least one of the following operations according to the received first image: face recognition, object detection and passenger flow analysis;
the control module is used for controlling at least one second camera to acquire one or more second images; and
a second uploading module for uploading at least one of the one or more second images to the second server;
wherein the first camera is installed in the goods purchasing area, and the second camera is installed in the storage area.
7. The system of claim 6, wherein the processing of the video data acquired by the at least one first camera to obtain one or more first images comprises:
decoding the video data to obtain at least one video frame;
encoding a particular video frame of the at least one video frame into a first image.
8. The system of claim 6, further comprising:
the acquisition module acquires the image capture task from the third server;
a synchronization module that synchronizes the image capture task into the first server, wherein the first server is different from the third server,
wherein, the processing the video data acquired by the at least one first camera comprises: and processing the video data acquired by the corresponding at least one first camera according to the image capture task.
9. The system of claim 8, wherein the image capture tasks corresponding to different first cameras are the same or different.
10. The system of claim 8, further comprising:
a third uploading module that uploads the execution state of the image capture task to the third server.
11. A processing system, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-5.
12. A computer readable medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 5.
CN201710998860.2A 2017-10-23 2017-10-23 Processing method and system Active CN109698850B (en)

Publications (2)

Publication Number Publication Date
CN109698850A CN109698850A (en) 2019-04-30
CN109698850B true CN109698850B (en) 2022-06-07




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant