WO2023185646A1 - Systems and methods for image processing - Google Patents

Systems and methods for image processing

Info

Publication number
WO2023185646A1
Authority
WO
WIPO (PCT)
Prior art keywords
processing
objects
processing device
preset condition
image
Application number
PCT/CN2023/083479
Other languages
French (fr)
Inventor
Yu Zhou
Original Assignee
Zhejiang Dahua Technology Co., Ltd.
Application filed by Zhejiang Dahua Technology Co., Ltd.
Publication of WO2023185646A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Definitions

  • the present disclosure generally relates to systems and methods for image processing, and more particularly, relates to systems and methods for privacy processing of objects in an image.
  • Video surveillance systems are widely used in a variety of applications to detect and monitor objects within an environment.
  • image data obtained by a video surveillance system includes various types of sensitive and private information of an object (e.g., a person) .
  • the sensitive and private information needs to be processed in order to protect personal privacy while ensuring the security of the public environment.
  • the amount of the image data obtained by the video surveillance system may be relatively large, and a computing capacity of the video surveillance system may be limited. Therefore, it is desirable to provide effective systems or methods for image processing associated with privacy processing in the video surveillance system.
  • a system for image processing may include at least one storage medium including a set of instructions, and at least one processor in communication with the at least one storage medium.
  • the at least one processor may be directed to cause the system to perform a method.
  • the method may include obtaining an original image captured by a capture device.
  • the method may include identifying one or more objects in the original image by a first processing device.
  • the method may include performing a first processing operation on at least a first part of the one or more objects by the first processing device.
  • the method may include performing a second processing operation on at least a second part of the one or more objects by a second processing device.
  • the method may include determining whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device.
  • the method may include, in response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device, selecting, from the one or more objects, at least one first object that satisfies a first preset condition.
  • the method may include performing the first processing operation on at least part of the at least one first object that satisfies the first preset condition.
  • the method may include extracting at least one feature of each of the at least one first object that satisfies the first preset condition.
  • the method may include performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the at least one feature.
  • the at least one feature of each of the at least one first object may include at least one of an angle of the first object, an image quality of the first object, or a distance between the first object and the capture device.
  • the at least one feature of each of the at least one first object may include a type of the first object.
  • the method may include performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on at least one priority corresponding to the at least one feature respectively.
  • the method may include determining at least one weight corresponding to at least one feature respectively.
  • the method may include determining a weighted result based on the at least one weight corresponding to the at least one feature respectively.
  • the method may include performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the weighted result.
  • the method may include determining whether a difference between the count of the one or more objects in the original image and a count of processed objects is greater than a threshold based on a processing result of the first processing operation.
  • the method may include, in response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, selecting, from the one or more objects, at least one second object that satisfies a second preset condition.
  • the method may include performing the second processing operation on the at least one second object that satisfies the second preset condition.
  • the first preset condition may be different from the second preset condition.
  • the method may include determining at least one unprocessed object that satisfies the first preset condition based on a processing result of the first processing operation.
  • the method may include performing the second processing operation on the at least one unprocessed object that satisfies the first preset condition.
  • the first processing device may be a processing device of the capture device, and the second processing device may be a back-end processing device.
  • the method may include obtaining a second image that is adjacently captured after the original image.
  • the method may include identifying one or more objects in the second image by the first processing device.
  • the method may include determining whether there is a static repeating object in the second image based on the one or more objects in the original image and the one or more objects in the second image.
  • the method may include, in response to determining that there is a static repeating object in the second image, determining that the static repeating object satisfies the first preset condition.
  • a method for image processing may be provided.
  • the method may include obtaining an original image captured by a capture device.
  • the method may include identifying one or more objects in the original image by a first processing device.
  • the method may include performing a first processing operation on at least a first part of the one or more objects by the first processing device.
  • the method may include performing a second processing operation on at least a second part of the one or more objects by a second processing device.
  • the method may include determining whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device.
  • the method may include, in response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device, selecting, from the one or more objects, at least one first object that satisfies a first preset condition.
  • the method may include performing the first processing operation on at least part of the at least one first object that satisfies the first preset condition.
  • the method may include extracting at least one feature of each of the at least one first object that satisfies the first preset condition.
  • the method may include performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the at least one feature.
  • the at least one feature of each of the at least one first object may include at least one of an angle of the first object, an image quality of the first object, or a distance between the first object and the capture device.
  • the at least one feature of each of the at least one first object may include a type of the first object.
  • the method may include performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on at least one priority corresponding to the at least one feature respectively.
  • the method may include determining at least one weight corresponding to at least one feature respectively.
  • the method may include determining a weighted result based on the at least one weight corresponding to the at least one feature respectively.
  • the method may include performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the weighted result.
  • the method may include determining whether a difference between the count of the one or more objects in the original image and a count of processed objects is greater than a threshold based on a processing result of the first processing operation.
  • the method may include, in response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, selecting, from the one or more objects, at least one second object that satisfies a second preset condition.
  • the method may include performing the second processing operation on the at least one second object that satisfies the second preset condition.
  • the first preset condition may be different from the second preset condition.
  • the method may include determining at least one unprocessed object that satisfies the first preset condition based on a processing result of the first processing operation.
  • the method may include performing the second processing operation on the at least one unprocessed object that satisfies the first preset condition.
  • the first processing device may be a processing device of the capture device, and the second processing device may be a back-end processing device.
  • the method may include obtaining a second image that is adjacently captured after the original image.
  • the method may include identifying one or more objects in the second image by the first processing device.
  • the method may include determining whether there is a static repeating object in the second image based on the one or more objects in the original image and the one or more objects in the second image.
  • the method may include, in response to determining that there is a static repeating object in the second image, determining that the static repeating object satisfies the first preset condition.
  • a non-transitory computer readable medium may include executable instructions that, when executed by at least one processor, direct the at least one processor to perform a method.
  • the method may include obtaining an original image captured by a capture device.
  • the method may include identifying one or more objects in the original image by a first processing device.
  • the method may include performing a first processing operation on at least a first part of the one or more objects by the first processing device.
  • the method may include performing a second processing operation on at least a second part of the one or more objects by a second processing device.
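  • To make the feature-based selection recited in the claims above concrete, the following Python sketch ranks detected objects by a weighted combination of per-feature scores (e.g., angle, image quality, distance) and keeps only as many as the device can process. The feature names, score scales, and weight values are illustrative assumptions of this example, not part of the disclosure.

```python
# Hypothetical sketch of the priority/weight-based selection recited above.
# Feature names, score scales, and weights are illustrative assumptions.

def weighted_score(obj, weights):
    """Combine per-feature scores into a single ranking value."""
    return sum(weights[name] * obj[name] for name in weights)

def select_first_objects(objects, weights, max_count):
    """Process the highest-scoring objects first, up to device capacity."""
    ranked = sorted(objects, key=lambda o: weighted_score(o, weights),
                    reverse=True)
    return ranked[:max_count]

# Example: prefer high image quality, then short distance, then frontal angle.
weights = {"quality": 0.5, "distance": 0.3, "angle": 0.2}
objects = [{"quality": 0.9, "distance": 0.4, "angle": 0.7},
           {"quality": 0.3, "distance": 0.9, "angle": 0.2}]
print(select_first_objects(objects, weights, max_count=1))
```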
  • FIG. 1 is a schematic diagram illustrating an exemplary image processing system according to some embodiments of the present disclosure
  • FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure
  • FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure
  • FIG. 4 is a block diagram illustrating an exemplary first processing device according to some embodiments of the present disclosure
  • FIG. 5 is a block diagram illustrating an exemplary second processing device according to some embodiments of the present disclosure.
  • FIG. 6 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure
  • FIG. 7A is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure
  • FIG. 7B is a schematic diagram illustrating an exemplary process for performing a first processing operation on a plurality of objects based on priorities corresponding to features respectively according to some embodiments of the present disclosure
  • FIG. 7C is a schematic diagram illustrating an exemplary process for performing a first processing operation on a plurality of objects based on a weighted result for each of the plurality of objects according to some embodiments of the present disclosure
  • FIG. 8A is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure
  • FIG. 8B is a schematic diagram illustrating an exemplary static repeating object according to some embodiments of the present disclosure.
  • FIG. 9 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure.
  • FIG. 10 is a schematic diagram illustrating an exemplary processing result of a first processing operation according to some embodiments of the present disclosure.
  • FIG. 11 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure.
  • The terms “system,” “engine,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be displaced by another expression if they achieve the same purpose.
  • An aspect of the present disclosure relates to systems and methods for image processing.
  • an original image captured by a capture device may be obtained.
  • One or more objects may be identified in the original image by a first processing device (e.g., a processing device of the capture device) .
  • a first processing operation (e.g., a masking operation, a coding operation, a blurring operation, a cutting operation) may be performed on at least a first part of the one or more objects by the first processing device.
  • a second processing operation (e.g., a masking operation, a coding operation, a blurring operation, a cutting operation) may be performed on at least a second part of the one or more objects by a second processing device (e.g., a back-end processing device) .
  • the first processing device may perform the first processing operation on a part of the identified object (s) in the original image, and the second processing device may perform the second processing operation on the other part of the identified object (s) in the original image, which can improve the efficiency of image processing, and ensure the effective protection of sensitive and private information associated with the original image.
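  • A minimal Python sketch of this camera/back-end split follows; the helper names (detect_objects, mask_region) and the per-frame capacity budget are assumptions of the example, not terms of the disclosure.

```python
# Minimal sketch of the two-stage privacy pipeline described above.

def first_stage(image, detect_objects, mask_region, capacity=8):
    """Camera-side pass: mask as many detected regions as the budget allows."""
    regions = detect_objects(image)              # e.g., face bounding boxes
    todo, pending = regions[:capacity], regions[capacity:]
    for box in todo:
        image = mask_region(image, box)          # first processing operation
    return image, pending                        # leftovers go to the back end

def second_stage(image, pending, mask_region):
    """Back-end pass: mask whatever the camera could not handle."""
    for box in pending:
        image = mask_region(image, box)          # second processing operation
    return image
```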
  • FIG. 1 is a schematic diagram illustrating an exemplary image processing system according to some embodiments of the present disclosure.
  • the image processing system 100 may include a first processing device 110, a capture device 120, a terminal device 130, a storage device 140, a network 150, and a second processing device 160.
  • the components of the image processing system 100 may be connected to each other in one or more of various ways.
  • the capture device 120 may be connected to the first processing device 110 through the network 150, or connected to the first processing device 110 directly as illustrated by the bidirectional dotted arrow connecting the capture device 120 and the first processing device 110 in FIG. 1.
  • the capture device 120 may be connected to the storage device 140 through the network 150, or connected to the storage device 140 directly as illustrated by the bidirectional dotted arrow connecting the capture device 120 and the storage device 140 in FIG. 1.
  • the terminal device 130 may be connected to the storage device 140 through the network 150, or connected to the storage device 140 directly as illustrated by the bidirectional dotted arrow connecting the terminal device 130 and the storage device 140 in FIG. 1.
  • the terminal device 130 may be connected to the second processing device 160 through the network 150, or connected to the second processing device 160 directly as illustrated by the bidirectional dotted arrow connecting the terminal device 130 and the second processing device 160 in FIG. 1.
  • the first processing device 110 may be connected to the second processing device 160 through the network 150, or connected to the second processing device 160 directly as illustrated by the bidirectional dotted arrow connecting the first processing device 110 and the second processing device 160 in FIG. 1.
  • the first processing device 110 may process information and/or data to perform one or more functions described in the present disclosure. For example, the first processing device 110 may obtain an original image captured by the capture device 120. As another example, the first processing device 110 may identify one or more objects in an original image. As still another example, the first processing device 110 may perform a first processing operation on at least a first part of one or more objects. As still another example, the first processing device 110 may transmit a processing result of a first processing operation to the second processing device 160. In some embodiments, the first processing device 110 may include one or more processing engines (e.g., single-core processing engine(s) or multi-core processor(s)). In some embodiments, the first processing device 110 may be a front-end inter-process communication (IPC) device.
  • the first processing device 110 may include a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a digital signal processor (DSP) , a field programmable gate array (FPGA) , a programmable logic device (PLD) , a controller, a microcontroller unit, a reduced instruction-set computer (RISC) , a microprocessor, or the like, or any combination thereof.
  • the first processing device 110 may be connected to the network 150 to communicate with one or more components (e.g., the capture device 120, the terminal device 130, the storage device 140, and/or the second processing device 160) of the image processing system 100. In some embodiments, the first processing device 110 may be directly connected to or communicate with one or more components (e.g., the capture device 120, the terminal device 130, the storage device 140, and/or the second processing device 160) of the image processing system 100. In some embodiments, the first processing device 110 may be part of the capture device 120.
  • the capture device 120 may be configured to capture image data (e.g., an original image) of an object.
  • the capture device 120 may be and/or include any suitable device that is capable of capturing image data of the object.
  • the capture device 120 may include a spherical camera, a hemispherical camera, a rifle camera, etc.
  • the capture device 120 may include a black-white camera, a color camera, an infrared camera, an X-ray camera, etc.
  • the capture device 120 may include a digital camera, an analog camera, etc.
  • the capture device 120 may include a monocular camera, a binocular camera, a multi-camera, etc.
  • the capture device 120 may be a network video recorder (NVR), an X video recorder (XVR), etc.
  • the capture device 120 may be an IP camera which can transmit the captured image data to any component (e.g., the first processing device 110, the terminal device 130, the storage device 140, the second processing device 160) of the image processing system 100 via the network 150.
  • the capture device 120 may be a camera with intelligent detection functions.
  • the capture device 120 may include a processing device (e.g., the first processing device 110) configured to process the captured image data (e.g., identify one or more objects in the original image, perform a first processing operation on at least a part of the one or more objects in the original image) .
  • the image data acquired by the capture device 120 may be transmitted to the first processing device 110 and/or the second processing device 160 for further analysis. Additionally or alternatively, the image data acquired by the capture device 120 may be transmitted to a terminal device (e.g., the terminal device 130) for display and/or a storage device (e.g., the storage device 140) for storage.
  • the capture device 120 may be configured to capture the image data of the object continuously or intermittently (e.g., periodically) .
  • the acquisition of the image data by the capture device 120, the transmission of the captured image data to the first processing device 110 (or the second processing device 160) , and the analysis of the image data may be performed substantially in real time so that the image data may provide information indicating a substantially real time status of the object.
  • the terminal devices 130 may include a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, a telephone 130-4, or the like, or any combination thereof.
  • the mobile device 130-1 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof.
  • the storage device 140 may store data and/or instructions.
  • the storage device 140 may store data obtained from the first processing device 110, the capture device 120, the terminal device 130, the second processing device 160, and/or an external storage device.
  • the storage device 140 may store an original image obtained from the capture device 120.
  • the storage device 140 may store one or more identified objects in an original image determined by the first processing device 110.
  • the storage device 140 may store a processing result of a first processing operation determined by the first processing device 110.
  • the storage device 140 may store data and/or instructions that the first processing device 110 and/or the second processing device 160 may execute or use to perform exemplary methods described in the present disclosure.
  • the storage device 140 may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
  • the storage device 140 may be connected to the network 150 to communicate with one or more components (e.g., the first processing device 110, the capture device 120, the terminal device 130, the second processing device 160) of the image processing system 100.
  • One or more components of the image processing system 100 may access the data or instructions stored in the storage device 140 via the network 150.
  • the storage device 140 may be directly connected to or communicate with one or more components (e.g., the first processing device 110, the capture device 120, the terminal device 130, the second processing device 160) of the image processing system 100.
  • the storage device 140 may be part of the capture device 120.
  • the network 150 may facilitate exchange of information and/or data.
  • the first processing device 110 may obtain/acquire an original image from the capture device 120 via the network 150.
  • the second processing device 160 may obtain a processing result of a first processing operation from the first processing device 110 via the network 150.
  • the network 150 may be any type of wired or wireless network, or combination thereof.
  • the network 150 may include one or more network access points.
  • the network 150 may include wired or wireless network access points (e.g., 150-1, 150-2) , through which one or more components of the image processing system 100 may be connected to the network 150 to exchange data and/or information.
  • the second processing device 160 may process information and/or data to perform one or more functions described in the present disclosure. For example, the second processing device 160 may obtain a processing result of a first processing operation from the first processing device 110. As another example, the second processing device 160 may determine whether a difference between a count of one or more objects in an original image and a count of processed objects is greater than a threshold based on a processing result of a first processing operation; in response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, the second processing device 160 may select, from the one or more objects, at least one second object that satisfies a second preset condition; and may perform a second processing operation on the at least one second object that satisfies the second preset condition.
  • the second processing device 160 may include one or more processing engines (e.g., single-core processing engine (s) or multi-core processor (s) ) .
  • the second processing device 160 may include a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a digital signal processor (DSP) , a field programmable gate array (FPGA) , a programmable logic device (PLD) , a controller, a microcontroller unit, a reduced instruction-set computer (RISC) , a microprocessor, or the like, or any combination thereof.
  • the second processing device 160 may be connected to the network 150 to communicate with one or more components (e.g., the first processing device 110, the capture device 120, the terminal device 130, the storage device 140) of the image processing system 100. In some embodiments, the second processing device 160 may be directly connected to or communicate with one or more components (e.g., the first processing device 110, the capture device 120, the terminal device 130, the storage device 140) of the image processing system 100. In some embodiments, the second processing device 160 may be a back-end processing device. For example, the second processing device 160 may be a back-end network video recorder (NVR) . In some embodiments, the second processing device 160 may be part of the storage device 140. In some embodiments, a computing capacity of the second processing device 160 may be higher than a computing capacity of the first processing device 110.
  • the image processing system 100 is merely provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations or modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
  • the image processing system 100 may further include a database, an information source, etc.
  • the image processing system 100 may be implemented on other devices to realize similar or different functions.
  • FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure.
  • the first processing device 110, the second processing device 160, and/or the terminal device 130 may be implemented on the computing device 200.
  • the computing device 200 may be used to implement any component of the image processing system 100 as described herein.
  • the first processing device 110 may be implemented on the computing device 200, via its hardware, software program, firmware, or a combination thereof.
  • the computer functions relating to the image processing as described herein may be implemented in a distributed fashion on a number of similar platforms to distribute the processing load.
  • the computing device 200 may include communication (COM) ports 250 connected to and from a network connected thereto to facilitate data communications.
  • the computing device 200 may also include a processor 220, in the form of one or more logic circuits, for executing program instructions.
  • the processor 220 may include interface circuits and processing circuits therein.
  • the interface circuits may be configured to receive electronic signals from a bus 210, wherein the electronic signals encode structured data and/or instructions for the processing circuits to process.
  • the processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus 210.
  • the computing device 200 may further include program storage and data storage of different forms including, for example, a disk 270, a read only memory (ROM) 230, or a random-access memory (RAM) 240, for storing various data files to be processed and/or transmitted by the computing device 200.
  • the computing device 200 may also include program instructions stored in the ROM 230, RAM 240, and/or another type of non-transitory storage medium to be executed by the processor 220.
  • the methods and/or processes of the present disclosure may be implemented as the program instructions.
  • the computing device 200 may also include an I/O component 260, supporting input/output between the computer and other components.
  • the computing device 200 may also receive programming and data via network communications.
  • Multiple processors are also contemplated; thus, operations and/or steps performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors.
  • For example, if the processor of the computing device 200 executes both operation A and operation B, it should be understood that operation A and operation B may also be performed by two different CPUs and/or processors jointly or separately in the computing device 200 (e.g., the first processor executes operation A and the second processor executes operation B, or the first and second processors jointly execute operations A and B).
  • FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure.
  • the terminal device 130, the first processing device 110, and/or the second processing device 160 may be implemented on a mobile device 300.
  • the mobile device 300 may include a communication unit 310, a display 320, a graphic processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O 350, a memory 360, a mobile operating system (OS) 370, and a storage 390.
  • Any other suitable component, including but not limited to a system bus or a controller (not shown), may also be included in the mobile device 300.
  • the mobile operating system 370 (e.g., iOS™, Android™, Windows Phone™, Harmony OS) and one or more applications 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the CPU 340.
  • the applications 380 may include a browser or any other suitable mobile app for receiving and rendering information relating to image processing or other information from the image processing system 100. User interactions with the information stream may be achieved via the I/O 350 and provided to the first processing device 110 and/or other components of the image processing system 100 via the network 150.
  • computer hardware platforms may be used as the hardware platform (s) for one or more of the elements described herein.
  • a computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device.
  • a computer may also act as a server if appropriately programmed.
  • FIG. 4 is a block diagram illustrating an exemplary first processing device according to some embodiments of the present disclosure.
  • the first processing device 110 may include an obtaining module 410, an identification module 420, a processing module 430, and a transmitting module 440.
  • the obtaining module 410 may be configured to obtain data and/or information associated with the image processing system 100.
  • the data and/or information associated with the image processing system 100 may include an original image, a second image, a processed image, one or more objects identified in the original image, one or more objects identified in the second image, a first processing operation, a second processing operation, a first preset condition, a second preset condition, a feature of an object, a priority corresponding to the feature of the object, a weight corresponding to the feature of the object, a static repeating object in the second image, or the like, or any combination thereof.
  • the obtaining module 410 may obtain an original image captured by a capture device.
  • the obtaining module 410 may obtain a second image that is adjacently captured after an original image. More descriptions for obtaining the second image may be found elsewhere in the present disclosure (e.g., operation 810 in FIG. 8A and descriptions thereof) .
  • the obtaining module 410 may obtain the data and/or information associated with the image processing system 100 from one or more components (e.g., the capture device 120, the terminal device 130, the storage device 140, the second processing device 160) of the image processing system 100 via the network 150.
  • the identification module 420 may be configured to identify one or more objects in an image. In some embodiments, the identification module 420 may identify one or more objects in an original image. In some embodiments, the identification module 420 may identify one or more objects in a second image. In some embodiments, the identification module 420 may identify the one or more objects in the original image or the second image based on an object detection algorithm (e.g., an inter-frame difference algorithm, a background difference algorithm, an optical flow algorithm) . More descriptions for identifying the one or more objects in the image may be found elsewhere in the present disclosure (e.g., operation 620 in FIG. 6, operation 820 in FIG. 8A, and descriptions thereof) .
  • the processing module 430 may be configured to process data and/or information associated with the image processing system 100. In some embodiments, the processing module 430 may perform a first processing operation on at least a first part of one or more objects in an original image. For example, the processing module 430 may determine whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device 110. In response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110, the processing module 430 may select, from the one or more objects, at least one first object that satisfies a first preset condition. The processing module 430 may perform a first processing operation on at least part of the at least one first object that satisfies the first preset condition.
  • the processing module 430 may perform the first processing operation on the one or more objects in the original image. More descriptions for performing the first processing operation on the at least the first part of the one or more objects may be found elsewhere in the present disclosure (e.g., operation 630 in FIG. 6, operations 730-760 in FIG. 7A, and descriptions thereof) .
  • the processing module 430 may determine whether there is a static repeating object in a second image based on one or more objects in an original image and one or more objects in the second image. In response to determining that there is a static repeating object in the second image, the processing module 430 may determine that the static repeating object satisfies a first preset condition. More descriptions for determining whether there is a static repeating object in a second image may be found elsewhere in the present disclosure (e.g., operations 830-840 in FIG. 8A and descriptions thereof) .
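  • One plausible way to realize this check, sketched below in Python, is to treat a detection in the second image that almost exactly overlaps a detection in the original image as a static repeating object; the intersection-over-union (IoU) criterion and its threshold are assumptions of this example, not the disclosed method.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def static_repeating_objects(prev_boxes, curr_boxes, thresh=0.9):
    """Boxes in the second image that barely moved since the original image."""
    return [c for c in curr_boxes
            if any(iou(c, p) >= thresh for p in prev_boxes)]
```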
  • the transmitting module 440 may be configured to transmit data and/or information associated with the image processing system 100. In some embodiments, the transmitting module 440 may transmit a processing result of a first processing operation to the second processing device 160. More descriptions for transmitting the processing result of the first processing operation to the second processing device 160 may be found elsewhere in the present disclosure (e.g., operation 770 in FIG. 7A and descriptions thereof) .
  • one or more modules may be combined into a single module.
  • the identification module 420 and the processing module 430 may be combined as a single module which may both identify one or more objects in an original image, and perform a first processing operation on at least a first part of the one or more objects.
  • one or more modules may be added.
  • the first processing device 110 may further include a storage module (not shown) used to store information and/or data (e.g., an original image, one or more identified objects) associated with the image processing system 100.
  • one or more modules may be omitted.
  • the transmitting module 440 may be omitted.
  • FIG. 5 is a block diagram illustrating an exemplary second processing device according to some embodiments of the present disclosure.
  • the second processing device 160 may include an obtaining module 510, an identification module 520, a determination module 530, a processing module 540, and a storage module 550.
  • the obtaining module 510 may be configured to obtain data and/or information associated with the image processing system 100. For example, the obtaining module 510 may obtain a processing result of a first processing operation from the first processing device 110. More descriptions for obtaining the processing result of the first processing operation may be found elsewhere in the present disclosure (e.g., operation 910 in FIG. 9 and descriptions thereof) . In some embodiments, the obtaining module 510 may obtain the data and/or information associated with the image processing system 100 from one or more components (e.g., the first processing device 110, the capture device 120, the terminal device 130, the storage device 140) of the image processing system 100 via the network 150.
  • the identification module 520 may be configured to identify one or more objects in an image. In some embodiments, the identification module 520 may also be configured to, in response to determining that a difference between a count of one or more objects in an original image and a count of processed objects is greater than a threshold, select, from the one or more objects in the original image, at least one second object that satisfies a second preset condition. More descriptions for selecting the at least one second object that satisfies the second preset condition may be found elsewhere in the present disclosure (e.g., operation 930 in FIG. 9 and descriptions thereof) .
  • the determination module 530 may be configured to determine data and/or information associated with the image processing system 100. In some embodiments, the determination module 530 may determine whether a difference between a count of one or more objects in an original image and a count of processed objects is greater than a threshold based on a processing result of a first processing operation. More descriptions for determining whether the difference between the count of the one or more objects in the original image and the count of processed objects is greater than the threshold may be found elsewhere in the present disclosure (e.g., operation 920 in FIG. 9 and descriptions thereof) .
  • the processing module 540 may be configured to perform a second processing operation on at least a second part of one or more objects in an original image.
  • the processing module 540 may perform a second processing operation on at least one second object that satisfies a second preset condition. More descriptions for performing the second processing operation on the at least one second object that satisfies the second preset condition may be found elsewhere in the present disclosure (e.g., operation 940 in FIG. 9 and descriptions thereof) .
  • the second processing device 160 may determine at least one unprocessed object that satisfies a first preset condition based on a processing result of a first processing operation.
  • the processing module 540 may perform a second processing operation on the at least one unprocessed object that satisfies the first preset condition. More descriptions for performing the second processing operation on the at least one unprocessed object that satisfies the first preset condition may be found elsewhere in the present disclosure (e.g., operation 950 in FIG. 9 and descriptions thereof) .
  • the storage module 550 may be configured to store data and/or information associated with the image processing system 100.
  • the storage module 550 may store an original image, a second image, a processed image, one or more objects identified in the original image, one or more objects identified in the second image, a first processing operation, a second processing operation, a first preset condition, a second preset condition, a feature of an object, a priority corresponding to the feature of the object, a weight corresponding to the feature of the object, a static repeating object in the second image, or the like, or any combination thereof.
  • one or more modules may be combined into a single module.
  • the identification module 520 and the determination module 530 may be combined as a single module.
  • one or more modules may be omitted.
  • the storage module 550 may be omitted.
  • one or more modules may be added.
  • the second processing device 160 may further include a preview module (not shown) used to preview information and/or data (e.g., an original image, one or more identified objects in the original image, a processing result of a first processing operation) associated with the image processing system 100.
  • a user of the image processing system 100 may preview the processing result of the first processing operation via the preview module, and adjust the first processing operation based on a preview result.
  • the second processing device 160 may further include a playback module (not shown) used to play back information and/or data (e.g., an original image, a processing result of a first processing operation) stored in a storage device (e.g., the storage device 140, the storage module 550) .
  • FIG. 6 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure.
  • the process 600 may be executed by the image processing system 100.
  • the process 600 may be implemented as a set of instructions stored in the storage 390.
  • the processor 220 and/or the module (s) in FIG. 4 and FIG. 5 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the module (s) may be configured to perform the process 600.
  • the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 600 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 600 as illustrated in FIG. 6 and described below is not intended to be limiting.
  • the first processing device 110 may obtain an original image captured by a capture device.
  • the original image may be a static image, a series of image frames, a video, etc.
  • the original image may be a two-dimensional image, a three-dimensional image, a four-dimensional image, etc.
  • the original image may further include voice information associated with the original image.
  • the first processing device 110 may obtain the original image from the capture device (e.g., the capture device 120) periodically (e.g., per second, per 2 seconds, per 5 seconds, per 10 seconds) or in real time.
  • the capture device 120 may transmit the original image to a storage device (e.g., the storage device 140) periodically (e.g., per second, per 2 seconds, per 5 seconds, per 10 seconds) or in real time via the network 150.
  • the first processing device 110 may access the storage device and retrieve the original image.
  • the first processing device 110 may be a processing device of the capture device.
  • the original image may be an original video stream captured by the capture device.
  • the capture device may obtain the original video stream, and process the original video stream into multiple original images frame by frame.
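  • A minimal Python/OpenCV sketch of this frame-by-frame decoding follows; the RTSP URL is a placeholder assumption, and the disclosure does not prescribe any particular stream source or library.

```python
import cv2  # OpenCV

# Hypothetical stream URL; the disclosure only says the capture device
# decodes its video stream into per-frame "original images".
cap = cv2.VideoCapture("rtsp://camera.example/stream")
while True:
    ok, frame = cap.read()   # one "original image" per decoded frame
    if not ok:
        break
    # each frame would now be handed to the first processing device
cap.release()
```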
  • the first processing device 110 may identify one or more objects in the original image.
  • the one or more objects may refer to information associated with privacy that needs to be processed (e.g., masked, coded, blurred, cut) in the original image.
  • the one or more objects may include a person, an animal, a facility, or a part thereof, etc.
  • the one or more objects may include face information (e.g., a face portion of a person) , identification information (e.g., a name, an ID number) , working information (e.g., an occupation, a working address) , or the like, or any combination thereof.
  • the first processing device 110 may detect the one or more objects in the original image according to an object detection algorithm (e.g., an inter-frame difference algorithm, a background difference algorithm, an optical flow algorithm) .
  • the one or more objects may include a face of a person.
  • the first processing device 110 may detect the face in the original image according to one or more face detection algorithms.
  • Exemplary face detection or recognition algorithms may include a knowledge-based technique, a feature-based technique, a template matching technique, an eigenface-based technique, a distribution-based technique, a neural-network-based technique, a support vector machine (SVM) based technique, a sparse network of winnows (SNoW) based technique, a naive Bayes classifier, a hidden Markov model, an information theoretical algorithm, an inductive learning technique, or the like, or any combination thereof.
  • the one or more objects may include identification information and/or working information of a person.
  • the first processing device 110 may recognize the identification information and/or the working information in the original image according to one or more text recognition algorithms.
  • Exemplary text recognition algorithms may include a template algorithm, an indicative algorithm, a structural recognition algorithm, an artificial neural network, or the like, or any combination thereof.
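  • As a hedged illustration of one of the listed approaches (the inter-frame difference algorithm), the following Python/OpenCV sketch reports bounding boxes of regions that changed between two consecutive grayscale frames; the threshold, dilation, and minimum-area values are tuning assumptions, not values from the disclosure.

```python
import cv2

def detect_by_frame_difference(prev_gray, curr_gray, min_area=500):
    """Inter-frame difference detection: bounding boxes of regions that
    changed between two consecutive grayscale frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)   # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```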
  • the first processing device 110 may perform a first processing operation on at least a first part of the one or more objects.
  • the first processing operation may be an image processing operation.
  • the first processing operation may include a masking operation, a coding operation, a blurring operation, a cutting operation, or the like, or any combination thereof.
  • the first processing device 110 may perform the first processing operation on the at least the first part of the one or more objects by adding a letter, a number, a pattern, or the like, or any combination thereof, on the at least the first part of the one or more objects.
  • the first processing device 110 may perform the first processing operation on the at least the first part of the one or more objects by replacing the at least the first part of the one or more objects with a letter, a number, a pattern, or the like, or any combination thereof.
  • the first processing device 110 may perform the first processing operation on the at least the first part of the one or more objects by coding or blurring the at least the first part of the one or more objects. In some embodiments, the first processing device 110 may perform the first processing operation on the at least the first part of the one or more objects by cutting the at least the first part of the one or more objects.
  • the one or more objects may include one or more human faces.
  • the first processing device 110 may process at least one human face (or a portion thereof (e.g., an area including eyes) ) by replacing the at least one human face or the portion thereof with a default pattern or a random pattern, or process the at least one human face (or a portion thereof (e.g., an area including eyes) ) by blurring or cutting the at least one human face in the original image.
  • the one or more objects may include identification information (e.g., an ID number) of a person.
  • the identification information may include a plurality of characters (e.g., a mark, a sign, a symbol, a letter, a Chinese character) .
  • the first processing device 110 may process (e.g., modify) one or more characters of the plurality of characters of the identification information. Specifically, the first processing device 110 may replace one or more characters of the plurality of characters of the identification information with default value (s) or random value (s) , or code (or blur) the one or more characters of the plurality of characters of the identification information.
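  • A minimal Python/OpenCV sketch of two of the operations just described follows: blurring a detected region (e.g., a face) and coding the characters of an identification string. The blur kernel size, mask character, and number of characters kept are assumptions of this example.

```python
import cv2

def blur_region(image, box):
    """Blurring operation: replace a region (e.g., a face) with a
    Gaussian-blurred copy of itself."""
    x, y, w, h = box
    roi = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (31, 31), 0)
    return image

def code_characters(text, keep=4):
    """Coding operation on text: replace all but the last `keep`
    characters of, e.g., an ID number with a default character."""
    return "*" * max(0, len(text) - keep) + text[-keep:]

# e.g., code_characters("AB1234567") -> "*****4567"
```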
  • the first processing device 110 may select the first part of the one or more objects (e.g., at least one first object) , and perform the first processing operation on the first part of the one or more objects. For example, the first processing device 110 may determine whether a count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110.
  • the first processing device 110 may select, from the one or more objects, at least one first object that satisfies a first preset condition. Then the first processing device 110 may perform the first processing operation on at least part of the at least one first object that satisfies the first preset condition. In response to determining that the count of the one or more objects in the original image does not exceed the computing capacity of the first processing device 110, the first processing device 110 may perform the first processing operation on the one or more objects in the original image. More descriptions for performing the first processing operation may be found elsewhere in the present disclosure (e.g., FIG. 7A and descriptions thereof) .
  • the first processing device 110 may generate a processing result of the first processing operation, and transmit the processing result of the first processing operation to the second processing device 160.
  • the processing result of the first processing operation may include a processed image.
  • the processed image may include processed object (s) and unprocessed object (s) in the original image.
  • the processing result of the first processing operation may include information associated with object (s) that do not satisfy the first preset condition, information associated with processed object (s) that satisfy the first preset condition, information associated with unprocessed object (s) that satisfy the first preset condition, or the like, or any combination thereof. More description of the processing result of the first processing operation may be found elsewhere in the present disclosure (e.g., FIG. 7A and descriptions thereof) .
  • the second processing device 160 may perform a second processing operation on at least a second part of the one or more objects.
  • the second processing operation may be the same as or different from the first processing operation.
  • the second processing operation may include a masking operation, a coding operation, a blurring operation, a cutting operation, or the like, or any combination thereof.
  • the second processing device 160 may obtain the processing result of the first processing operation from the first processing device 110.
  • the second processing device 160 may determine at least one unprocessed object that satisfies the first preset condition based on the processing result of the first processing operation.
  • the second processing device 160 may perform the second processing operation on the at least one unprocessed object that satisfies the first preset condition.
  • the second processing device 160 may determine whether a difference between the count of the one or more objects in the original image and a count of processed objects is greater than a threshold based on the processing result of the first processing operation. In response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, the second processing device 160 may select, from the one or more objects, at least one second object that satisfies a second preset condition. Then the second processing device 160 may perform the second processing operation on the at least one second object that satisfies the second preset condition.
  • FIG. 7A is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure.
  • the process 700 may be executed by the image processing system 100.
  • the process 700 may be implemented as a set of instructions stored in the storage 390.
  • the processor 220 and/or the module (s) in FIG. 4 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the module (s) may be configured to perform the process 700.
  • the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 700 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 700 are illustrated in FIG. 7A and described below is not intended to be limiting.
  • the first processing device 110 may obtain an original image captured by a capture device.
  • Operation 710 may be performed in a similar manner as operation 610 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
  • the first processing device 110 may identify one or more objects in the original image.
  • Operation 720 may be performed in a similar manner as operation 620 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
  • the first processing device 110 may determine whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device 110.
  • the first processing device 110 may determine a maximum count of objects that the first processing device 110 is able to process (e.g., mask) based on the computing capacity of the first processing device 110.
  • the first processing device 110 may determine whether the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110 by determining whether the count of the one or more objects in the original image exceeds the maximum count of objects that the first processing device 110 is able to process.
  • In response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110, the process 700 may proceed to operation 740.
  • In response to determining that the count of the one or more objects in the original image does not exceed the computing capacity of the first processing device 110, the process 700 may proceed to operation 760.
  • In response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may select, from the one or more objects, at least one first object that satisfies a first preset condition.
  • the first processing device 110 may extract feature (s) of each of the one or more objects.
  • the feature of the object may include an angle of the object, an image quality of the object, a distance between the object and the capture device, a type of the object, or the like, or any combination thereof.
  • an angle of an object refers to a deflection angle between a surface of the object (e.g., a front surface of a human body of a person) and an image plane of an image (e.g., the original image) .
  • the deflection angle may be represented by a deflection direction (e.g., an upward deflection, a downward deflection, a left deflection, a right deflection) and a deflection angle value (e.g., 20 degrees, 60 degrees) .
  • For example, the angle of the object may be a deflection of 20 degrees in the upward direction, a deflection of 65 degrees in the right direction, etc.
  • an image quality of an object refers to a clarity of the object in an image (e.g., the original image) .
  • the image quality of the object may be represented by a count of pixels per unit area of an image area of the object.
  • a distance between an object and a capture device refers to a straight-line distance from the object to a specific point (e.g., a center point) of the capture device.
  • the type of the object may include an infant, a child, an adolescent, an adult, etc.
  • the type of the object may also include a visitor, an internal staff, etc.
  • the first processing device 110 may extract the feature (s) of the object by analyzing the object in the original image. For example, the first processing device 110 may determine the angle of the object, the image quality of the object, the distance between the object and the capture device, and/or the type of the object based on a machine learning model (e.g., a neural network model, a regression model, a classification tree model) . As another example, the first processing device 110 may determine the distance between the object and the capture device based on image coordinates of the object in an image coordinate system and parameters (e.g., intrinsic parameters, extrinsic parameters) of the capture device.
  • an image coordinate system refers to a coordinate system that describes positions of an object in an image captured by a capture device.
  • an origin may be an upper-left corner point of the image.
  • the X-axis may be from a left side to a right side of the image, and the Y-axis may be from an upper side to a lower side of the image.
  • the first preset condition may include that the angle of the object is less than a first angle threshold, the image quality of the object is greater than a first quality threshold, the distance between the object and the capture device is less than a first distance threshold, the type of the object belongs to a preset type, or the like, or any combination thereof.
  • the first preset condition (e.g., the first angle threshold, the first quality threshold, the first distance threshold, the preset type) may be manually set by a user of the image processing system 100, or automatically set by one or more components (e.g., the first processing device 110) of the image processing system 100 according to different situations.
  • the first preset condition may be determined based on a monitoring requirement, a monitoring environment, etc.
  • Depending on the required accuracy of image processing (e.g., masking the one or more objects in the original image) , the first angle threshold may be set relatively large, the first quality threshold may be set relatively low, and/or the first distance threshold may be set relatively large.
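  • As a concrete illustration, the first preset condition can be expressed as a predicate over the extracted features. The following is a hedged sketch: the field names and threshold defaults are assumptions, and it simply checks all four sub-conditions even though the disclosure allows any combination of them.

```python
from dataclasses import dataclass

@dataclass
class ObjectFeatures:
    angle_deg: float   # deflection angle between the object surface and the image plane
    quality: float     # e.g., count of pixels per unit area of the object's image area
    distance_m: float  # straight-line distance from the object to the capture device
    obj_type: str      # e.g., "infant", "adult", "visitor", "internal staff"

def satisfies_first_condition(f, angle_th=30.0, quality_th=0.5,
                              distance_th=10.0, preset_types=("visitor",)):
    # All four sub-conditions are required in this sketch.
    return (f.angle_deg < angle_th
            and f.quality > quality_th
            and f.distance_m < distance_th
            and f.obj_type in preset_types)
```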
  • the first processing device 110 may perform a first processing operation on at least part of the at least one first object that satisfies the first preset condition.
  • the first processing device 110 may determine whether a count of the at least one first object that satisfies the first preset condition exceeds the computing capacity of the first processing device 110. In response to determining that the count of the at least one first object that satisfies the first preset condition does not exceed the computing capacity of the first processing device 110, the first processing device 110 may perform the first processing operation on the at least one first object that satisfies the first preset condition. In response to determining that the count of the at least one first object exceeds the computing capacity of the first processing device 110, the first processing device 110 may perform the first processing operation on a part of the at least one first object based on at least one feature of each of the at least one first object.
  • the first processing device 110 may perform the first processing operation on the part of the at least one first object based on at least one priority corresponding to the at least one feature respectively.
  • a priority corresponding to a feature refers to a priority of performing the first processing operation on an object including the feature.
  • the priority corresponding to the feature may indicate an importance of the feature in the first processing operation.
  • a priority corresponding to “angle” may be higher than a priority corresponding to “image quality. ”
  • a priority corresponding to “distance” may be higher than the priority corresponding to “image quality. ”
  • the priority corresponding to “angle” > the priority corresponding to “image quality” > the priority corresponding to “distance. ”
  • the first processing device 110 may determine whether a count of first object (s) with an angle of the first object less than the first angle threshold exceeds the computing capacity of the first processing device 110. In response to determining that the count of first object (s) with the angle of the first object less than the first angle threshold exceeds the computing capacity of the first processing device 110, the first processing device 110 may determine a ranking of the first object (s) with the angle of the first object less than the first angle threshold according to the angle of each of the first object (s) in an ascending order.
  • the first processing device 110 may determine a ranking of the first objects as: first object A, first object B, first object C. Further, the first processing device 110 may perform the first processing operation on one or more of the first object (s) with the angle of the first object less than the first angle threshold according to the ranking of the first object (s) based on the computing capacity of the first processing device 110.
  • the first processing device 110 may perform the first processing operation on the first object (s) with the angle of the first object less than the first angle threshold. The first processing device 110 may then determine whether a count of first object (s) with the image quality of the first object greater than the first quality threshold exceeds the remaining computing capacity of the first processing device 110.
  • the first processing device 110 may determine a ranking of the first object (s) with the image quality of the first object greater than the first quality threshold according to the image quality of each of the first object (s) in a descending order.
  • the first processing device 110 may perform the first processing operation on one or more of the first object (s) with the image quality of the first object greater than the first quality threshold according to the ranking of the first object (s) based on the remaining computing capacity of the first processing device 110.
  • the first processing device 110 may further determine whether a count of first object (s) with the distance between the first object and the capture device less than the first distance threshold exceeds the remaining computing capacity of the first processing device 110.
  • the first processing device 110 may determine a ranking of the first object (s) with the distance between the first object and the capture device less than the first distance threshold according to the distance between each of the first object (s) and the capture device in an ascending order.
  • the first processing device 110 may perform the first processing operation on one or more of the first object (s) with the distance between the first object and the capture device less than the first distance threshold according to the ranking of the first object (s) based on the remaining computing capacity of the first processing device 110.
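  • The tiered, priority-ordered selection described above might be sketched as follows. Only the ordering logic (angle ascending, then image quality descending, then distance ascending, stopping when the capacity is exhausted) follows the text; the attribute names, thresholds, and capacity handling are assumptions.

```python
def select_by_priority(objects, capacity, angle_th=30.0, quality_th=0.5, distance_th=10.0):
    """Pick up to `capacity` objects to process, tier by tier."""
    selected = []
    remaining = list(objects)

    def take(candidates, key, reverse=False):
        for obj in sorted(candidates, key=key, reverse=reverse):
            if len(selected) >= capacity:
                return
            selected.append(obj)
            remaining.remove(obj)

    # Tier 1 ("angle" has the highest priority): smallest deflection angle first.
    take([o for o in remaining if o.angle_deg < angle_th], key=lambda o: o.angle_deg)
    # Tier 2 ("image quality"): highest image quality first.
    take([o for o in remaining if o.quality > quality_th],
         key=lambda o: o.quality, reverse=True)
    # Tier 3 ("distance"): shortest distance to the capture device first.
    take([o for o in remaining if o.distance_m < distance_th], key=lambda o: o.distance_m)
    return selected
```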
  • the first processing device 110 may determine at least one weight corresponding to the at least one feature respectively.
  • a weight corresponding to a feature of an object may indicate an importance of the feature in the first processing operation.
  • the first processing device 110 may determine a weighted result based on the at least one weight corresponding to the at least one feature respectively and feature value (s) of the at least one feature of the first object.
  • the first processing device 110 may determine the feature value (s) according to a mapping approach. Taking “distance” as an example, the first processing device 110 may define a specific range (e.g., 0 to 1) , map the distance between the first object and the capture device into the specific range, and determine a feature value corresponding to “distance” based on a mapping value corresponding to the distance.
  • For example, if the distance between the first object and the capture device is greater than a first threshold, the first processing device 110 may determine a corresponding mapping value as 0; if the distance between the first object and the capture device is smaller than a second threshold (e.g., 0.5 m) , the first processing device 110 may determine a corresponding mapping value as 1. Accordingly, the smaller the distance between the first object and the capture device is, the larger the corresponding mapping value may be.
  • the first threshold and/or the second threshold may be default settings of the image processing system 100 or may be adjustable under different situations.
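  • A sketch of the distance mapping is below. The text fixes only the endpoints (a mapping value of 0 beyond the first threshold and 1 below the second threshold, e.g., 0.5 m); the linear interpolation in between and the 5 m value for the first threshold are assumptions.

```python
def map_distance(distance_m, far_th=5.0, near_th=0.5):
    """Map a distance into [0, 1] so that smaller distances yield larger values."""
    if distance_m >= far_th:   # beyond the first threshold -> 0
        return 0.0
    if distance_m <= near_th:  # within the second threshold -> 1
        return 1.0
    # Assumed linear interpolation between the two thresholds.
    return (far_th - distance_m) / (far_th - near_th)
```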
  • The first processing device 110 may then determine a weighted result for the first object based on the feature value (s) of the first object and the weight (s) corresponding to the feature (s) .
  • For example, it is assumed that a first weight corresponds to a first feature (e.g., “angle” ) , a second weight corresponds to a second feature (e.g., “image quality” ) , and a third weight corresponds to a third feature (e.g., “distance” ) . It is further assumed that the values of the first, second, and third features of a first object A are A1, A2, and A3, the values of the three features of a first object B are B1, B2, and B3, and the values of the three features of a first object C are C1, C2, and C3, respectively. A weighted result for each first object may then be determined as the sum of its feature values multiplied by the corresponding weights.
  • the first processing device 110 may perform the first processing operation on the part of the at least one first object that satisfies the first preset condition based on the weighted result (s) .
  • the first processing device 110 may determine a ranking of the at least one first object that satisfies the first preset condition according to the weighted result for each of the at least one first object in a descending order.
  • the first processing device 110 may perform the first processing operation on one or more of the at least one first object that satisfies the first preset condition according to the ranking of the at least one first object based on the computing capacity of the first processing device 110.
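  • The weighted variant reduces to a weighted sum per object followed by a descending sort. A minimal sketch, assuming features and weights are keyed by name and feature values are already mapped into a common range:

```python
def weighted_result(feature_values, weights):
    """Weighted sum of feature values, e.g., w_angle*v_angle + w_quality*v_quality + ..."""
    return sum(weights[name] * feature_values[name] for name in weights)

def select_by_weight(objects_features, weights, capacity):
    """Rank objects by weighted result (descending) and keep as many as capacity allows."""
    ranked = sorted(objects_features,
                    key=lambda fv: weighted_result(fv, weights), reverse=True)
    return ranked[:capacity]
```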
  • In response to determining that the count of the one or more objects in the original image does not exceed the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may perform the first processing operation on the one or more objects in the original image.
  • For example, the first processing device 110 may perform a masking operation on all of the one or more objects in the original image.
  • the first processing device 110 may transmit a processing result of the first processing operation to a second processing device.
  • the processing result of the first processing operation may include information associated with the one or more objects in the original image.
  • the processing result of the first processing operation may include information associated with object (s) that do not satisfy the first preset condition, information associated with processed object (s) that satisfy the first preset condition, information associated with unprocessed object (s) that satisfy the first preset condition, or the like, or any combination thereof.
  • the information associated with each of the one or more objects in the original image may include the at least one feature of the object, coordinate information of the object, mark information of the object, or the like, or any combination thereof.
  • the coordinate information of the object may include image coordinates of the object in the image coordinate system.
  • the mark information of the object may indicate whether the object satisfies the first preset condition, whether the first processing operation is performed on the object, or the like, or any combination thereof.
  • an object with a mark of “0” may indicate that the object does not satisfy the first preset condition
  • an object with a mark of “10” may indicate that the object satisfies the first preset condition and the first processing operation is not performed on the object
  • an object with a mark of “11” may indicate that the object satisfies the first preset condition and the first processing operation is performed on the object.
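  • The mark scheme might be encoded as small helper functions; the string marks follow the text, while the function names are illustrative:

```python
def make_mark(satisfies_condition, processed):
    """Encode an object's state as "0", "10", or "11" as described above."""
    if not satisfies_condition:
        return "0"
    return "11" if processed else "10"

def needs_second_processing(mark):
    # The second processing device targets objects that satisfy the first
    # preset condition but were left unprocessed by the first device.
    return mark == "10"
```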
  • the first processing device 110 may transmit the original image, a processed image (i.e., an image obtained after the first processing operation is performed on the original image) , and the processing result of the first processing operation to the second processing device 160.
  • the second processing device 160 may determine at least one unprocessed object that satisfies the first preset condition based on the processing result of the first processing operation. For example, the second processing device 160 may determine at least one unprocessed object that satisfies the first preset condition based on the mark information of each of the one or more objects. As another example, the second processing device 160 may determine the at least one unprocessed object that satisfies the first preset condition based on the at least one feature of each of the one or more objects.
  • the second processing device 160 may perform a second processing operation (e.g., a masking operation) on the at least one unprocessed object that satisfies the first preset condition.
  • the second processing device 160 may perform the second processing operation on the at least one unprocessed object that satisfies the first preset condition based on coordinate information of each of the at least one unprocessed object.
  • the second processing device 160 may determine whether a difference between the count of the one or more objects in the original image and a count of processed objects is greater than a threshold based on the processing result of the first processing operation. In response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, the second processing device 160 may select, from the one or more objects, at least one second object that satisfies a second preset condition. The second processing device 160 may perform the second processing operation on the at least one second object that satisfies the second preset condition. More descriptions for performing the second processing operation on the at least one second object that satisfies the second preset condition may be found elsewhere in the present disclosure (e.g., FIG. 9 and descriptions thereof) .
  • the at least one first object that satisfies the first preset condition may be selected from the one or more objects, and the first processing operation may be performed on the at least part of the at least one first object that satisfies the first preset condition.
  • a part of the identified objects in the original image may be processed selectively and effectively by the first processing device, and the other part of the identified objects in the original image may be processed by the second processing device, which can prevent the identified object in the original image from being unprocessed due to the limited computing capacity of the first processing device, and improve the accuracy and efficiency of image processing.
  • the first processing operation may be performed on object (s) that satisfy the first preset condition (e.g., object (s) with a relatively small angle of the object, a relatively high image quality of the object, and/or a relatively short distance between the object and the capture device) by the first processing device 110, and object (s) that do not satisfy the first preset condition may not need to be processed or may be processed by the second processing device 160, to save the computing capacity of the first processing device 110.
  • the processing methods of the object in different images may be different, which can achieve dynamic image processing.
  • the first processing device 110 may determine a priority corresponding to each type of feature information in the feature of the object respectively. For example, for the feature of the type of the object, a priority corresponding to “infant (or a child) ” may be higher than a priority corresponding to “adult. ” As another example, a priority corresponding to “visitor” may be higher than a priority corresponding to “internal staff. ”
  • FIG. 7B is a schematic diagram illustrating an exemplary process for performing a first processing operation on a plurality of objects based on priorities corresponding to features respectively according to some embodiments of the present disclosure.
  • It is assumed that a first priority corresponding to a feature A (e.g., “angle” ) is higher than a second priority corresponding to a feature B (e.g., “image quality” ) , and that the second priority corresponding to the feature B is higher than a third priority corresponding to a feature C (e.g., “distance” ) .
  • the first processing device 110 may first perform a first processing operation on object A, object B, and object C that satisfy a preset condition associated with the feature A (e.g., a condition that the angle of the object is less than a first angle threshold) .
  • the first processing device 110 may then perform the first processing operation on object D and object E that satisfy a preset condition associated with the feature B (e.g., a condition that the image quality of the object is greater than a first quality threshold) .
  • the first processing device 110 may further perform the first processing operation on object F and object G that satisfy a preset condition associated with the feature C (e.g., a condition that the distance between the object and the capture device is less than a first distance threshold) .
  • a count of processed objects by the first processing device 110 may be determined based on a computing capacity of the first processing device 110.
  • FIG. 7C is a schematic diagram illustrating an exemplary process for performing a first processing operation on a plurality of objects based on a weighted result for each of the plurality of objects according to some embodiments of the present disclosure.
  • It is assumed that a first weight corresponding to a feature A (e.g., “angle” ) is 0.3, a second weight corresponding to a feature B (e.g., “image quality” ) is 0.5, and a third weight corresponding to a feature C (e.g., “distance” ) is 0.2. It is further assumed that a value of the feature A of an object A is 0.5, a value of the feature B of the object A is 0.6, and a value of the feature C of the object A is 0.2, so that the weighted result for the object A is 0.3×0.5+0.5×0.6+0.2×0.2 = 0.49.
  • the first processing device 110 may then determine a ranking of the plurality of objects (i.e., object A, object B, and object C) based on the weighted result for each of the plurality of objects in a descending order. For example, the first processing device 110 may determine the ranking of the plurality of objects as: object C > object A > object B. The first processing device 110 may further perform a first processing operation on one or more of the plurality of objects according to the ranking of the plurality of objects based on a computing capacity of the first processing device 110.
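  • Reproducing the FIG. 7C arithmetic for the object A (feature values for the object B and the object C are not given in the text, so only the object A's result is shown):

```python
weights  = {"A": 0.3, "B": 0.5, "C": 0.2}   # feature A: angle, B: image quality, C: distance
object_a = {"A": 0.5, "B": 0.6, "C": 0.2}
score_a = sum(weights[k] * object_a[k] for k in weights)
print(round(score_a, 2))  # 0.3*0.5 + 0.5*0.6 + 0.2*0.2 = 0.49
```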
  • FIG. 8A is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure.
  • the process 800 may be executed by the image processing system 100.
  • the process 800 may be implemented as a set of instructions stored in the storage 390.
  • the processor 220 and/or the module (s) in FIG. 4 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the module (s) may be configured to perform the process 800.
  • the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 800 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 800 are illustrated in FIG. 8A and described below is not intended to be limiting.
  • the first processing device 110 may obtain a second image that is captured immediately after an original image.
  • the original image and the second image may be continuous in time.
  • an image A and an image B being continuous in time means that a time difference between a first time point corresponding to the image A and a second time point corresponding to the image B is less than a threshold.
  • a time point corresponding to an image refers to a time point when the image is acquired (e.g., by a capture device) .
  • the original image and the second image may be consecutive image frames in a video.
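  • A minimal check for this notion of temporal continuity, assuming per-frame capture timestamps are available; the 0.1 s threshold is an illustrative value only:

```python
def continuous_in_time(time_a_s, time_b_s, max_gap_s=0.1):
    """True if the capture-time difference between two images is below a threshold."""
    return abs(time_b_s - time_a_s) < max_gap_s
```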
  • the first processing device 110 may identify one or more objects in the second image.
  • Operation 820 may be performed in a similar manner as operation 620 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
  • the first processing device 110 may determine whether there is a static repeating object in the second image based on one or more objects in the original image and the one or more objects in the second image.
  • the first processing device 110 may extract feature information of each of the one or more objects in the original image.
  • the feature information may include a shape (e.g., a contour, an area, a height, a width, a ratio of height to width) , a color, a texture, or the like, or any combination thereof, of the object or a portion of the object, such as a face component (e.g., eyes, the nose, the mouth) of the object.
  • the first processing device 110 may detect a human face in the original image according to one or more face detection algorithms as described elsewhere in the present disclosure.
  • the first processing device 110 may extract the feature information of the human face according to one or more feature extraction algorithms.
  • Exemplary feature extraction algorithms may include a principal component analysis (PCA) , a linear discriminant analysis (LDA) , an independent component analysis (ICA) , a multi-dimensional scaling (MDS) algorithm, a discrete cosine transform (DCT) algorithm, or the like, or any combination thereof.
  • PCA principal component analysis
  • LDA linear discriminant analysis
  • ICA independent component analysis
  • MDS multi-dimensional scaling
  • DCT discrete cosine transform
  • the first processing device 110 may then extract feature information of each of the one or more objects in the second image.
  • the extraction of the feature information of the object in the second image may be performed in a similar manner as that of the feature information of the object in the original image.
  • the first processing device 110 may further determine whether there is a static repeating object in the second image based on the feature information of each of the one or more objects in the original image and the feature information of each of the one or more objects in the second image. For example, for each object in the second image, the first processing device 110 may determine a degree of similarity between the object in the second image and each object in the original image based on the feature information of the object in the second image and the feature information of the each object in the original image. In response to determining that a degree of similarity between an object in the second image and an object in the original image is greater than a similarity threshold (e.g., 95%, 99%) , the first processing device 110 may determine the object in the second image as the static repeating object in the second image.
  • a degree of similarity between an object A (e.g., an object in the original image) and an object B (e.g., an object in the second image) may be determined by various approaches.
  • the first processing device 110 may determine a first feature vector representing the feature information of the object A (also referred to as the first feature vector corresponding to the object A) .
  • the first processing device 110 may determine a second feature vector representing the feature information of the object B (also referred to as the second feature vector corresponding to the object B) .
  • the first processing device 110 may determine the degree of similarity between the object A and the object B by determining a degree of similarity between the first feature vector and the second feature vector.
  • a degree of similarity between two feature vectors may be determined based on a similarity algorithm, for example, a Euclidean distance algorithm, a Manhattan distance algorithm, a Minkowski distance algorithm, a cosine similarity algorithm, a Jaccard similarity algorithm, a Pearson correlation algorithm, or the like, or any combination thereof.
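  • As one concrete choice among the algorithms listed above, cosine similarity between two feature vectors could drive the static-repeat test. The 0.95 threshold matches the example values in the text; the plain-list vector representation is an assumption.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_static_repeat(vec_second, vecs_original, threshold=0.95):
    """An object in the second image is a static repeating object if its feature
    vector is sufficiently similar to that of some object in the original image."""
    return any(cosine_similarity(vec_second, v) > threshold for v in vecs_original)
```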
  • the first processing device 110 may determine whether there is a static repeating object in the second image based on the one or more objects in the original image and the one or more objects in the second image using a trained machine learning model (e.g., a neural network model, a regression model, a classification tree model) .
  • FIG. 8B is a schematic diagram illustrating an exemplary static repeating object according to some embodiments of the present disclosure.
  • a first image 801 and a second image 802 are adjacent images captured by a capture device (e.g., the capture device 120) .
  • An object A and an object B are identified in the first image 801.
  • An object C and an object D are identified in the second image 802.
  • the object C is a static repeating object in the second image 802 corresponding to the object A in the first image 801.
  • In response to determining that there is a static repeating object in the second image, the first processing device 110 (e.g., the processing module 430) may determine that the static repeating object satisfies a first preset condition (e.g., the first preset condition as described in connection with operation 750 in FIG. 7A) .
  • the first processing device 110 may obtain a processing result of a first processing operation associated with the original image.
  • the first processing device 110 may determine whether an object in the original image corresponding to the static repeating object in the second image satisfies the first preset condition based on the processing result of the first processing operation (e.g., mark information of the object in the original image) associated with the original image.
  • In response to determining that the object in the original image corresponding to the static repeating object satisfies the first preset condition, the first processing device 110 may determine that the static repeating object satisfies the first preset condition. Further, the first processing device 110 may perform the first processing operation on the static repeating object in the second image.
  • A determination may be made as to whether there is a static repeating object in the second image, and in response to determining that there is a static repeating object in the second image, the static repeating object may be processed based on the processing result of the first processing operation associated with the original image. For example, if the object in the original image corresponding to the static repeating object in the second image satisfies the first preset condition, the static repeating object in the second image may also be considered to satisfy the first preset condition. Therefore, it is not necessary to determine whether the static repeating object in the second image satisfies the first preset condition, which can save time of image processing and improve the efficiency of image processing.
  • FIG. 9 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure.
  • the process 900 may be executed by the image processing system 100.
  • the process 900 may be implemented as a set of instructions stored in the storage 390.
  • the processor 220 and/or the module (s) in FIG. 5 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the module (s) may be configured to perform the process 900.
  • the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 900 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 900 are illustrated in FIG. 9 and described below is not intended to be limiting.
  • the second processing device 160 may obtain a processing result of a first processing operation from a first processing device.
  • the processing result of the first processing operation may include information associated with one or more objects in an original image, as described in connection with operation 770 in FIG. 7A.
  • the second processing device 160 may obtain the original image, a processed image (i.e., an image obtained after the first processing operation is performed on the original image) , and the processing result of the first processing operation from the first processing device 110 via the network 150.
  • the second processing device 160 may determine whether a difference between a count of one or more objects in an original image and a count of processed objects is greater than a threshold based on the processing result of the first processing operation.
  • the second processing device 160 may determine the count of the one or more objects in the original image and the count of the processed objects based on the processing result of the first processing operation. The second processing device 160 may determine the difference between the count of the one or more objects in the original image and the count of the processed objects by subtracting the count of the processed objects from the count of the one or more objects in the original image.
  • the threshold may be manually set by a user of the image processing system 100, or automatically set by one or more components (e.g., the first processing device 110, the second processing device 160) of the image processing system 100 according to different situations.
  • In response to determining that the difference between the count of the one or more objects in the original image and the count of processed objects is greater than the threshold, the second processing device 160 (e.g., the identification module 520) may select, from the one or more objects, at least one second object that satisfies a second preset condition.
  • A difference greater than the threshold may indicate that the count of the one or more objects in the original image far exceeds a computing capacity of the first processing device 110, that is, the count of the one or more objects in the original image is far greater than the maximum count of objects that the first processing device 110 is able to process.
  • the second processing device 160 may then select, from the one or more objects, the at least one second object that satisfies the second preset condition based on at least one feature of each of the one or more objects.
  • the second preset condition may be different from the first preset condition.
  • the second preset condition may include that an angle of the object is less than a second angle threshold, an image quality of the object is greater than a second quality threshold, a distance between the object and a capture device is less than a second distance threshold, or the like, or any combination thereof.
  • the second angle threshold may be greater than a first angle threshold in the first preset condition.
  • the second quality threshold may be less than a first quality threshold in the first preset condition.
  • the second distance threshold may be greater than a first distance threshold in the first preset condition. Accordingly, when the computing capacity required to process all identified object (s) in the original image exceeds the computing capacity of the first processing device 110, the first processing device 110 may perform the first processing operation on a part of the identified object (s) in the original image that satisfies the first preset condition, and the second processing device 160 may perform the second processing operation on the other part of the identified object (s) in the original image that satisfies the second preset condition, and the second preset condition may be different from the first preset condition, which can ensure the effective protection of sensitive and private information associated with the original image.
  • the second processing device 160 may perform a second processing operation on the at least one second object that satisfies the second preset condition.
  • the second processing device 160 may perform a masking operation, a coding operation, a blurring operation, a cutting operation, or the like, or any combination thereof, on the at least one second object that satisfies the second preset condition.
  • In response to determining that the difference between the count of the one or more objects in the original image and the count of processed objects is not greater than the threshold, the second processing device 160 (e.g., the processing module 540) may perform the second processing operation on at least one unprocessed object that satisfies a first preset condition.
  • the second processing device 160 may determine the at least one unprocessed object that satisfies the first preset condition based on the processing result of the first processing operation. For example, the second processing device 160 may determine the at least one unprocessed object that satisfies the first preset condition based on mark information of the one or more objects. As another example, the second processing device 160 may determine the at least one unprocessed object that satisfies the first preset condition based on at least one feature of each of the one or more objects. Further, the second processing device 160 may perform the second processing operation (e.g., a masking operation) on the at least one unprocessed object that satisfies the first preset condition. For example, the second processing device 160 may perform the second processing operation on the at least one unprocessed object that satisfies the first preset condition based on coordinate information of the at least one unprocessed object.
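  • Putting the second processing device's two branches together, a hedged sketch (the attribute name `mark` and the callback for the second preset condition are assumptions):

```python
def second_device_targets(objects, processed_count, diff_threshold,
                          satisfies_second_condition):
    """Decide which objects the second processing device should process."""
    if len(objects) - processed_count > diff_threshold:
        # Far more objects were identified than the first device could handle:
        # fall back to the (looser) second preset condition.
        return [o for o in objects if satisfies_second_condition(o)]
    # Otherwise, finish the objects marked "10" (satisfy the first preset
    # condition but unprocessed by the first device).
    return [o for o in objects if o.mark == "10"]
```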
  • FIG. 10 is a schematic diagram illustrating an exemplary processing result of a first processing operation according to some embodiments of the present disclosure.
  • a processing result 1000 of a first processing operation associated with an original image may include one or more objects (e.g., an object C, an object D) that do not satisfy the first preset condition, one or more processed objects (e.g., an object A, an object B, an object F) that satisfy the first preset condition, and one or more unprocessed objects (e.g., an object E, an object G) that satisfy the first preset condition.
  • the object A, the object B, and the object F are marked with marks “11” which indicates that the object A, the object B, and the object F satisfy the first preset condition, and the first processing operation is performed on the object A, the object B, and the object F.
  • the object E and the object G are marked with marks “10” which indicates that the object E and the object G satisfy the first preset condition, and the first processing operation is not performed on the object E and the object G.
  • FIG. 11 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure.
  • the process 1100 may be executed by the image processing system 100.
  • the process 1100 may be implemented as a set of instructions stored in the storage 390.
  • the processor 220 and/or the module (s) in FIG. 4 and FIG. 5 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the module (s) may be configured to perform the process 1100.
  • the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 1100 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 1100 are illustrated in FIG. 11 and described below is not intended to be limiting.
  • the first processing device 110 may obtain an original image captured by a capture device.
  • Operation 1101 may be performed in a similar manner as operation 610 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
  • the first processing device 110 may identify one or more objects in the original image.
  • Operation 1102 may be performed in a similar manner as operation 620 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
  • the first processing device 110 may determine whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device 110.
  • Operation 1103 may be performed in a similar manner as operation 730 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
  • In response to determining that the count of the one or more objects in the original image does not exceed the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may perform a first processing operation on the one or more objects in the original image.
  • Operation 1104 may be performed in a similar manner as operation 760 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
  • the first processing device 110 may transmit a processing result of the first processing operation to a second processing device 160.
  • In response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may select, from the one or more objects, at least one first object that satisfies a first preset condition.
  • Operation 1105 may be performed in a similar manner as operation 740 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
  • the first processing device 110 may determine whether a count of the at least one first object that satisfies the first preset condition exceeds the computing capacity of the first processing device.
  • Operation 1106 may be performed in a similar manner as operation 750 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
  • In response to determining that the count of the at least one first object that satisfies the first preset condition does not exceed the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may perform the first processing operation on the at least one first object that satisfies the first preset condition.
  • Operation 1107 may be performed in a similar manner as operation 750 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
  • the first processing device 110 may transmit the processing result of the first processing operation to the second processing device 160.
  • In response to determining that the count of the at least one first object exceeds the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may perform the first processing operation on at least part of the at least one first object that satisfies the first preset condition.
  • Operation 1108 may be performed in a similar manner as operation 750 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
  • the first processing device 110 may transmit a processing result of the first processing operation to a second processing device 160.
  • the second processing device 160 may determine at least one unprocessed object that satisfies the first preset condition based on the processing result of the first processing operation.
  • Operation 1109 may be performed in a similar manner as operation 640 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
  • the second processing device 160 may perform a second processing operation on the at least one unprocessed object that satisfies the first preset condition.
  • Operation 1110 may be performed in a similar manner as operation 640 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
  • In some embodiments, the processing result of the first processing operation may indicate that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110. In this case, the second processing device 160 (e.g., the identification module 520) may re-identify object (s) (e.g., one or more new objects) in the original image.
  • the second processing device 160 may perform the second processing operation on at least part of the one or more new objects identified in the original image.
  • the second processing device 160 may select, from the one or more new objects, at least one new object that satisfies the first preset condition, and perform the second processing operation on the at least one new object that satisfies the first preset condition. In some embodiments, the second processing device 160 may perform the second processing operation on all the one or more new objects in the original image.
  • a parameter matching operation may be performed on the first processing device 110 and the second processing device 160, such that the first preset condition stored in the first processing device 110 and the first preset condition stored in the second processing device 160 are the same.
  • the methods and systems for image processing disclosed in the present disclosure may be applied in a monitoring system for a shopping mall or a plaza.
  • In such a scene, a large number of objects (e.g., human faces) in an image captured by a capture device of the monitoring system may need to be processed (e.g., masked) , and a count of the objects may exceed the computing capacity of a first processing device 110 of the capture device.
  • a plurality of objects identified in an image captured by a capture device may be processed (e.g., masked) based on an order of object identification time or positions of the plurality of objects in the image. For example, if a first object is identified first in the original image, a second object is identified next, and a third object is identified after that, the first processing device 110 may process the first object first, then the second object, and then the third object.
  • If a count of identified objects exceeds a computing capacity of a processing device of the capture device, subsequent identified object (s) may not be processed due to the limited computing capacity of the processing device of the capture device.
  • the plurality of objects in the image may be processed from a top to a bottom (or from a left side to a right side) of the image.
  • If a count of processed objects exceeds the computing capacity of the processing device of the capture device, object (s) in at least one area of the image may not be processed due to the limited computing capacity of the processing device of the capture device.
  • the first processing device 110 may perform the first processing operation on a part of the identified object (s) in the original image, and the second processing device 160 may perform the second processing operation on the other part of the identified object (s) in the original image, which can improve the efficiency of image processing, and ensure the effective protection of sensitive and private information associated with the original image.
  • aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or context including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented as entirely hardware, entirely software (including firmware, resident software, micro-code, etc.) , or a combination of software and hardware implementations that may all generally be referred to herein as a “unit, ” “module, ” or “system. ” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present disclosure may be written in a combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python or the like, conventional procedural programming languages, such as the “C” programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages.
  • the program code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .
  • the numbers expressing quantities or properties used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about, ” “approximate, ” or “substantially. ”
  • “about, ” “approximate, ” or “substantially” may indicate a ±20% variation of the value it describes, unless otherwise stated.
  • the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment.
  • the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable.


Abstract

The present disclosure is related to systems and methods for image processing. The method includes obtaining an original image captured by a capture device. The method includes identifying one or more objects in the original image by a first processing device. The method includes performing a first processing operation on at least a first part of the one or more objects by the first processing device. The method includes performing a second processing operation on at least a second part of the one or more objects by a second processing device.

Description

SYSTEMS AND METHODS FOR IMAGE PROCESSING
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority of Chinese Patent Application No. 202210326603.5, filed on March 30, 2022, the contents of which are hereby incorporated by reference.
TECHNICAL FIELD
The present disclosure generally relates to systems and methods for image processing, and more particularly, relates to systems and methods for privacy processing of objects in an image.
BACKGROUND
Video surveillance systems are widely used in a variety of applications to detect and monitor objects within an environment. Usually, image data obtained by a video surveillance system includes various types of sensitive and private information of an object (e.g., a person) . The sensitive and private information needs to be processed in order to protect personal privacy while ensuring the security of public environment. However, the amount of the image data obtained by the video surveillance system may be relatively large, and a computing capacity of the video surveillance system may be limited. Therefore, it is desirable to provide effective systems or methods for image processing associated with privacy processing in the video surveillance system.
SUMMARY
According to an aspect of the present disclosure, a system for image processing may include at least one storage medium including a set of instructions, and at least one processor in communication with the at least one storage medium. When executing the set of instructions, the at least one processor may be directed to cause the system to perform a method. The method may include obtaining an original image captured by a capture device. The method may include identifying one or more objects in the original image by a first processing device. The method may include performing a first processing operation on at least a first part of the one or more objects by the first processing device. The method may include performing a second processing operation on at least a second part of the one or more objects by a second processing device.
In some embodiments, the method may include determining whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device. The method may include, in response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device, selecting, from the one or more objects, at least one first object that satisfies a first preset condition. The method may include performing the first processing operation on at least part of the at least one first object that satisfies the first preset condition.
In some embodiments, the method may include extracting at least one feature of each of the at least one first object that satisfies the first preset condition. The method may include  performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the at least one feature.
In some embodiments, the at least one feature of each of the at least one first object may include at least one of an angle of the first object, an image quality of the first object, or a distance between the first object and the capture device.
In some embodiments, the at least one feature of each of the at least one first object may include a type of the first object.
In some embodiments, the method may include performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on at least one priority corresponding to the at least one feature respectively.
In some embodiments, the method may include determining at least one weight corresponding to at least one feature respectively. The method may include determining a weighted result based on the at least one weight corresponding to the at least one feature respectively. The method may include performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the weighted result.
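Merely for illustration, the following minimal Python sketch shows one way the weighted result could be computed from normalized object features and used to rank objects for the first processing operation. The feature names, weight values, and score ranges are illustrative assumptions only; the present disclosure does not prescribe them.

    # A minimal sketch of weighted-result ranking. Feature names, weights,
    # and the 0-1 score ranges are illustrative assumptions.
    def weighted_result(features, weights):
        """Combine normalized feature scores into a single weighted value."""
        return sum(weights[name] * score for name, score in features.items())

    weights = {"angle": 0.2, "image_quality": 0.5, "distance": 0.3}
    objects = [
        {"id": 1, "features": {"angle": 0.9, "image_quality": 0.8, "distance": 0.6}},
        {"id": 2, "features": {"angle": 0.4, "image_quality": 0.3, "distance": 0.9}},
    ]

    # Rank objects by weighted result; the first processing device may then
    # process the highest-ranked objects within its computing capacity.
    ranked = sorted(objects,
                    key=lambda obj: weighted_result(obj["features"], weights),
                    reverse=True)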
In some embodiments, the method may include determining whether a difference between the count of the one or more objects in the original image and a count of processed objects is greater than a threshold based on a processing result of the first processing operation. The method may include, in response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, selecting, from the one or more objects, at least one second object that satisfies a second preset condition. The method may include performing the second processing operation on the at least one second object that satisfies the second preset condition.
In some embodiments, the first preset condition may be different from the second preset condition.
In some embodiments, the method may include determining at least one unprocessed object that satisfies the first preset condition based on a processing result of the first processing operation. The method may include performing the second processing operation on the at least one unprocessed object that satisfies the first preset condition.
In some embodiments, the first processing device may be a processing device of the capture device, and the second processing device may be a back-end processing device.
In some embodiments, the method may include obtaining a second image that is adjacently captured after the original image. The method may include identifying one or more objects in the second image by the first processing device. The method may include determining whether there is a static repeating object in the second image based on the one or more objects in the original image and the one or more objects in the second image. The  method may include, in response to determining that there is a static repeating object in the second image, determining that the static repeating object satisfies the first preset condition.
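Merely for illustration, the following minimal Python sketch shows one possible test for a static repeating object: a bounding box in the second image that nearly coincides with a box in the original image. The intersection-over-union (IoU) measure and the 0.9 threshold are illustrative assumptions; the present disclosure does not prescribe them.

    # A minimal sketch, assuming axis-aligned bounding boxes (x1, y1, x2, y2).
    def iou(a, b):
        """Intersection-over-union of two bounding boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        if inter == 0:
            return 0.0
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / float(area_a + area_b - inter)

    def static_repeating_objects(prev_boxes, curr_boxes, thresh=0.9):
        """Objects in the second image whose position barely changed."""
        return [c for c in curr_boxes if any(iou(c, p) >= thresh for p in prev_boxes)]

    prev = [(100, 100, 200, 200)]
    curr = [(101, 99, 201, 199), (400, 50, 480, 150)]
    static = static_repeating_objects(prev, curr)  # first box: IoU ~0.96 -> static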
According to another aspect of the present disclosure, a method for image processing may be provided. The method may include obtaining an original image captured by a capture device. The method may include identifying one or more objects in the original image by a first processing device. The method may include performing a first processing operation on at least a first part of the one or more objects by the first processing device. The method may include performing a second processing operation on at least a second part of the one or more objects by a second processing device.
In some embodiments, the method may include determining whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device. The method may include, in response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device, selecting, from the one or more objects, at least one first object that satisfies a first preset condition. The method may include performing the first processing operation on at least part of the at least one first object that satisfies the first preset condition.
In some embodiments, the method may include extracting at least one feature of each of the at least one first object that satisfies the first preset condition. The method may include performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the at least one feature.
In some embodiments, the at least one feature of each of the at least one first object may include at least one of an angle of the first object, an image quality of the first object, or a distance between the first object and the capture device.
In some embodiments, the at least one feature of each of the at least one first object may include a type of the first object.
In some embodiments, the method may include performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on at least one priority corresponding to the at least one feature respectively.
In some embodiments, the method may include determining at least one weight corresponding to at least one feature respectively. The method may include determining a weighted result based on the at least one weight corresponding to the at least one feature respectively. The method may include performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the weighted result.
In some embodiments, the method may include determining whether a difference between the count of the one or more objects in the original image and a count of processed objects is greater than a threshold based on a processing result of the first processing operation. The method may include, in response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, selecting, from the one or more objects, at least one second object that satisfies a second preset condition. The method may include performing the second processing operation on the at least one second object that satisfies the second preset condition.
In some embodiments, the first preset condition may be different from the second preset condition.
In some embodiments, the method may include determining at least one unprocessed object that satisfies the first preset condition based on a processing result of the first processing operation. The method may include performing the second processing operation on the at least one unprocessed object that satisfies the first preset condition.
In some embodiments, the first processing device may be a processing device of the capture device, and the second processing device may be a back-end processing device.
In some embodiments, the method may include obtaining a second image that is adjacently captured after the original image. The method may include identifying one or more objects in the second image by the first processing device. The method may include determining whether there is a static repeating object in the second image based on the one or more objects in the original image and the one or more objects in the second image. The method may include, in response to determining that there is a static repeating object in the second image, determining that the static repeating object satisfies the first preset condition.
According to still another aspect of the present disclosure, a non-transitory computer readable medium may include executable instructions that, when executed by at least one processor, direct the at least one processor to perform a method. The method may include obtaining an original image captured by a capture device. The method may include identifying one or more objects in the original image by a first processing device. The method may include performing a first processing operation on at least a first part of the one or more objects by the first processing device. The method may include performing a second processing operation on at least a second part of the one or more objects by a second processing device.
Additional features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities and combinations set forth in the detailed examples discussed below.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
FIG. 1 is a schematic diagram illustrating an exemplary image processing system according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure;
FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure;
FIG. 4 is a block diagram illustrating an exemplary first processing device according to some embodiments of the present disclosure;
FIG. 5 is a block diagram illustrating an exemplary second processing device according to some embodiments of the present disclosure;
FIG. 6 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure;
FIG. 7A is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure;
FIG. 7B is a schematic diagram illustrating an exemplary process for performing a first processing operation on a plurality of objects based on priorities corresponding to features respectively according to some embodiments of the present disclosure;
FIG. 7C is a schematic diagram illustrating an exemplary process for performing a first processing operation on a plurality of objects based on a weighted result for each of the plurality of objects according to some embodiments of the present disclosure;
FIG. 8A is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure;
FIG. 8B is a schematic diagram illustrating an exemplary static repeating object according to some embodiments of the present disclosure;
FIG. 9 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure;
FIG. 10 is a schematic diagram illustrating an exemplary processing result of a first processing operation according to some embodiments of the present disclosure; and
FIG. 11 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well-known methods, procedures, systems, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid  unnecessarily obscuring aspects of the present disclosure. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a, ” “an, ” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise, ” “comprises, ” and/or “comprising, ” “include, ” “includes, ” and/or “including, ” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood that the term “system, ” “engine, ” “unit, ” “module, ” and/or “block” used herein are one method to distinguish different components, elements, parts, sections, or assembly of different level in ascending order. However, the terms may be displaced by another expression if they achieve the same purpose.
These and other features, and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form a part of this disclosure. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale.
An aspect of the present disclosure relates to systems and methods for image processing. According to some systems and methods of the present disclosure, an original image captured by a capture device may be obtained. One or more objects may be identified in the original image by a first processing device (e.g., a processing device of the capture device) . A first processing operation (e.g., a masking operation, a coding operation, a blurring operation, a cutting operation) may be performed on at least a first part of the one or more objects by the first processing device. A second processing operation (e.g., a masking operation, a coding operation, a blurring operation, a cutting operation) may be performed on at least a second part of the one or more objects by a second processing device (e.g., a back-end processing device) .
According to some embodiments of the present disclosure, due to a limited computing capacity of the first processing device, when a computing capacity required to process all identified object (s) in the original image exceeds a computing capacity of the first processing device, the first processing device may perform the first processing operation on a part of the  identified object (s) in the original image, and the second processing device may perform the second processing operation on the other part of the identified object (s) in the original image, which can improve the efficiency of image processing, and ensure the effective protection of sensitive and private information associated with the original image.
FIG. 1 is a schematic diagram illustrating an exemplary image processing system according to some embodiments of the present disclosure. In some embodiments, the image processing system 100 may include a first processing device 110, a capture device 120, a terminal device 130, a storage device 140, a network 150, and a second processing device 160. The components of the image processing system 100 may be connected to each other in one or more of various ways. Merely by way of example, the capture device 120 may be connected to the first processing device 110 through the network 150, or connected to the first processing device 110 directly as illustrated by the bidirectional dotted arrow connecting the capture device 120 and the first processing device 110 in FIG. 1. As another example, the capture device 120 may be connected to the storage device 140 through the network 150, or connected to the storage device 140 directly as illustrated by the bidirectional dotted arrow connecting the capture device 120 and the storage device 140 in FIG. 1. As still another example, the terminal device 130 may be connected to the storage device 140 through the network 150, or connected to the storage device 140 directly as illustrated by the bidirectional dotted arrow connecting the terminal device 130 and the storage device 140 in FIG. 1. As still another example, the terminal device 130 may be connected to the second processing device 160 through the network 150, or connected to the second processing device 160 directly as illustrated by the bidirectional dotted arrow connecting the terminal device 130 and the second processing device 160 in FIG. 1. As still another example, the first processing device 110 may be connected to the second processing device 160 through the network 150, or connected to the second processing device 160 directly as illustrated by the bidirectional dotted arrow connecting the first processing device 110 and the second processing device 160 in FIG. 1.
The first processing device 110 may process information and/or data to perform one or more functions described in the present disclosure. For example, the first processing device 110 may obtain an original image captured by the capture device 120. As another example, the first processing device 110 may identify one or more objects in an original image. As still another example, the first processing device 110 may perform a first processing operation on at least a first part of one or more objects. As still another example, the first processing device 110 may transmit a processing result of a first processing operation to the second processing device 160. In some embodiments, the first processing device 110 may include one or more processing engines (e.g., single-core processing engine (s) or multi-core processor (s) ) . In some embodiments, the first processing device 110 may be a front-end IP camera (IPC) device. Merely by way of example, the first processing device 110 may include a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a digital signal processor (DSP) , a field programmable gate array (FPGA) , a programmable logic device (PLD) , a controller, a microcontroller unit, a reduced instruction-set computer (RISC) , a microprocessor, or the like, or any combination thereof.
In some embodiments, the first processing device 110 may be connected to the network 150 to communicate with one or more components (e.g., the capture device 120, the terminal device 130, the storage device 140, and/or the second processing device 160) of the image processing system 100. In some embodiments, the first processing device 110 may be directly connected to or communicate with one or more components (e.g., the capture device 120, the terminal device 130, the storage device 140, and/or the second processing device 160) of the image processing system 100. In some embodiments, the first processing device 110 may be part of the capture device 120.
The capture device 120 may be configured to capture image data (e.g., an original image) of an object. The capture device 120 may be and/or include any suitable device that is capable of capturing image data of the object. In some embodiments, the capture device 120 may include a spherical camera, a hemispherical camera, a rifle camera, etc. In some embodiments, the capture device 120 may include a black-and-white camera, a color camera, an infrared camera, an X-ray camera, etc. In some embodiments, the capture device 120 may include a digital camera, an analog camera, etc. In some embodiments, the capture device 120 may include a monocular camera, a binocular camera, a multi-camera, etc. In some embodiments, the capture device 120 may be a network video recorder (NVR) , an X video recorder (XVR) , etc. In some embodiments, the capture device 120 may be an IP camera which can transmit the captured image data to any component (e.g., the first processing device 110, the terminal device 130, the storage device 140, the second processing device 160) of the image processing system 100 via the network 150.
In some embodiments, the capture device 120 may be a camera with intelligent detection functions. For example, the capture device 120 may include a processing device (e.g., the first processing device 110) configured to process the captured image data (e.g., identify one or more objects in the original image, perform a first processing operation on at least a part of the one or more objects in the original image) .
In some embodiments, the image data acquired by the capture device 120 may be transmitted to the first processing device 110 and/or the second processing device 160 for further analysis. Additionally or alternatively, the image data acquired by the capture device 120 may be transmitted to a terminal device (e.g., the terminal device 130) for display and/or a storage device (e.g., the storage device 140) for storage.
In some embodiments, the capture device 120 may be configured to capture the image data of the object continuously or intermittently (e.g., periodically) . In some embodiments, the acquisition of the image data by the capture device 120, the transmission of the captured image  data to the first processing device 110 (or the second processing device 160) , and the analysis of the image data may be performed substantially in real time so that the image data may provide information indicating a substantially real time status of the object.
In some embodiments, the terminal devices 130 may include a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, a telephone 130-4, or the like, or any combination thereof. In some embodiments, the mobile device 130-1 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof.
The storage device 140 may store data and/or instructions. In some embodiments, the storage device 140 may store data obtained from the first processing device 110, the capture device 120, the terminal device 130, the second processing device 160, and/or an external storage device. For example, the storage device 140 may store an original image obtained from the capture device 120. As another example, the storage device 140 may store one or more identified objects in an original image determined by the first processing device 110. As another example, the storage device 140 may store a processing result of a first processing operation determined by the first processing device 110. In some embodiments, the storage device 140 may store data and/or instructions that the first processing device 110 and/or the second processing device 160 may execute or use to perform exemplary methods described in the present disclosure. In some embodiments, the storage device 140 may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
In some embodiments, the storage device 140 may be connected to the network 150 to communicate with one or more components (e.g., the first processing device 110, the capture device 120, the terminal device 130, the second processing device 160) of the image processing system 100. One or more components of the image processing system 100 may access the data or instructions stored in the storage device 140 via the network 150. In some embodiments, the storage device 140 may be directly connected to or communicate with one or more components (e.g., the first processing device 110, the capture device 120, the terminal device 130, the second processing device 160) of the image processing system 100. In some embodiments, the storage device 140 may be part of the capture device 120.
The network 150 may facilitate exchange of information and/or data. In some embodiments, one or more components (e.g., the first processing device 110, the capture device 120, the terminal device 130, the storage device 140, the second processing device 160) of the image processing system 100 may send information and/or data to other component (s) of the image processing system 100 via the network 150. For example, the first processing device 110 may obtain/acquire an original image from the capture device 120 via the network 150. As another example, the second processing device 160 may obtain a processing result of a first processing operation from the first processing device 110 via the network 150. In some  embodiments, the network 150 may be any type of wired or wireless network, or combination thereof. In some embodiments, the network 150 may include one or more network access points. For example, the network 150 may include wired or wireless network access points (e.g., 150-1, 150-2) , through which one or more components of the image processing system 100 may be connected to the network 150 to exchange data and/or information.
The second processing device 160 may process information and/or data to perform one or more functions described in the present disclosure. For example, the second processing device 160 may obtain a processing result of a first processing operation from the first processing device 110. As another example, the second processing device 160 may determine whether a difference between a count of one or more objects in an original image and a count of processed objects is greater than a threshold based on a processing result of a first processing operation; in response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, the second processing device 160 may select, from the one or more objects, at least one second object that satisfies a second preset condition; and may perform a second processing operation on the at least one second object that satisfies the second preset condition. In some embodiments, the second processing device 160 may include one or more processing engines (e.g., single-core processing engine (s) or multi-core processor (s) ) . Merely by way of example, the second processing device 160 may include a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a digital signal processor (DSP) , a field programmable gate array (FPGA) , a programmable logic device (PLD) , a controller, a microcontroller unit, a reduced instruction-set computer (RISC) , a microprocessor, or the like, or any combination thereof.
In some embodiments, the second processing device 160 may be connected to the network 150 to communicate with one or more components (e.g., the first processing device 110, the capture device 120, the terminal device 130, the storage device 140) of the image processing system 100. In some embodiments, the second processing device 160 may be directly connected to or communicate with one or more components (e.g., the first processing device 110, the capture device 120, the terminal device 130, the storage device 140) of the image processing system 100. In some embodiments, the second processing device 160 may be a back-end processing device. For example, the second processing device 160 may be a back-end network video recorder (NVR) . In some embodiments, the second processing device 160 may be part of the storage device 140. In some embodiments, a computing capacity of the second processing device 160 may be higher than a computing capacity of the first processing device 110.
It should be noted that the image processing system 100 is merely provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. For  persons having ordinary skills in the art, multiple variations or modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the image processing system 100 may further include a database, an information source, etc. As another example, the image processing system 100 may be implemented on other devices to realize similar or different functions.
FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure. In some embodiments, the first processing device 110, the second processing device 160, and/or the terminal device 130 may be implemented on the computing device 200.
The computing device 200 may be used to implement any component of the image processing system 100 as described herein. For example, the first processing device 110 may be implemented on the computing device 200, via its hardware, software program, firmware, or a combination thereof. Although only one such computer is shown, for convenience, the computer functions relating to the image processing as described herein may be implemented in a distributed fashion on a number of similar platforms to distribute the processing load.
The computing device 200 may include communication (COM) ports 250 connected to and from a network connected thereto to facilitate data communications. The computing device 200 may also include a processor 220, in the form of one or more logic circuits, for executing program instructions. For example, the processor 220 may include interface circuits and processing circuits therein. The interface circuits may be configured to receive electronic signals from a bus 210, wherein the electronic signals encode structured data and/or instructions for the processing circuits to process. The processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus 210.
The computing device 200 may further include program storage and data storage of different forms including, for example, a disk 270, a read only memory (ROM) 230, or a random-access memory (RAM) 240, for storing various data files to be processed and/or transmitted by the computing device 200. The computing device 200 may also include program instructions stored in the ROM 230, RAM 240, and/or another type of non-transitory storage medium to be executed by the processor 220. The methods and/or processes of the present disclosure may be implemented as the program instructions. The computing device 200 may also include an I/O component 260, supporting input/output between the computer and other components. The computing device 200 may also receive programming and data via network communications.
Merely for illustration, only one processor is described in FIG. 2. Multiple processors are also contemplated; thus, operations and/or steps performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors. For example, if in the present disclosure the processor of the computing device 200 executes both operation A and operation B, it should be understood that operation A and operation B may also be performed by two different CPUs and/or processors jointly or separately in the computing device 200 (e.g., the first processor executes operation A and the second processor executes operation B, or the first and second processors jointly execute operations A and B) .
FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure. In some embodiments, the terminal device 130, the first processing device 110, and/or the second processing device 160 may be implemented on a mobile device 300.
As illustrated in FIG. 3, the mobile device 300 may include a communication unit 310, a display 320, a graphic processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O 350, a memory 360, a mobile operating system (OS) 370, and a storage 390. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown) , may also be included in the mobile device 300.
In some embodiments, the mobile operating system 370 (e.g., iOS™, Android™, Windows Phone™, Harmony OS) and one or more applications 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the CPU 340. The applications 380 may include a browser or any other suitable mobile app for receiving and rendering information relating to image processing or other information from the image processing system 100. User interactions with the information stream may be achieved via the I/O 350 and provided to the first processing device 110 and/or other components of the image processing system 100 via the network 150.
To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform (s) for one or more of the elements described herein. A computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device. A computer may also act as a server if appropriately programmed.
FIG. 4 is a block diagram illustrating an exemplary first processing device according to some embodiments of the present disclosure. The first processing device 110 may include an obtaining module 410, an identification module 420, a processing module 430, and a transmitting module 440.
The obtaining module 410 may be configured to obtain data and/or information associated with the image processing system 100. The data and/or information associated with the image processing system 100 may include an original image, a second image, a processed image, one or more objects identified in the original image, one or more objects identified in the second image, a first processing operation, a second processing operation, a first preset condition, a second preset condition, a feature of an object, a priority corresponding to the feature of the object, a weight corresponding to the feature of the object, a static repeating object  in the second image, or the like, or any combination thereof. For example, the obtaining module 410 may obtain an original image captured by a capture device. More descriptions for obtaining the original image may be found elsewhere in the present disclosure (e.g., operation 610 in FIG. 6 and descriptions thereof) . As another example, the obtaining module 410 may obtain a second image that is adjacently captured after an original image. More descriptions for obtaining the second image may be found elsewhere in the present disclosure (e.g., operation 810 in FIG. 8A and descriptions thereof) . In some embodiments, the obtaining module 410 may obtain the data and/or information associated with the image processing system 100 from one or more components (e.g., the capture device 120, the terminal device 130, the storage device 140, the second processing device 160) of the image processing system 100 via the network 150.
The identification module 420 may be configured to identify one or more objects in an image. In some embodiments, the identification module 420 may identify one or more objects in an original image. In some embodiments, the identification module 420 may identify one or more objects in a second image. In some embodiments, the identification module 420 may identify the one or more objects in the original image or the second image based on an object detection algorithm (e.g., an inter-frame difference algorithm, a background difference algorithm, an optical flow algorithm) . More descriptions for identifying the one or more objects in the image may be found elsewhere in the present disclosure (e.g., operation 620 in FIG. 6, operation 820 in FIG. 8A, and descriptions thereof) .
The processing module 430 may be configured to process data and/or information associated with the image processing system 100. In some embodiments, the processing module 430 may perform a first processing operation on at least a first part of one or more objects in an original image. For example, the processing module 430 may determine whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device 110. In response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110, the processing module 430 may select, from the one or more objects, at least one first object that satisfies a first preset condition. The processing module 430 may perform a first processing operation on at least part of the at least one first object that satisfies the first preset condition. In response to determining that the count of the one or more objects in the original image does not exceed the computing capacity of the first processing device 110, the processing module 430 may perform the first processing operation on the one or more objects in the original image. More descriptions for performing the first processing operation on the at least the first part of the one or more objects may be found elsewhere in the present disclosure (e.g., operation 630 in FIG. 6, operations 730-760 in FIG. 7A, and descriptions thereof) .
In some embodiments, the processing module 430 may determine whether there is a static repeating object in a second image based on one or more objects in an original image and one or more objects in the second image. In response to determining that there is a static  repeating object in the second image, the processing module 430 may determine that the static repeating object satisfies a first preset condition. More descriptions for determining whether there is a static repeating object in a second image may be found elsewhere in the present disclosure (e.g., operations 830-840 in FIG. 8A and descriptions thereof) .
The transmitting module 440 may be configured to transmit data and/or information associated with the image processing system 100. In some embodiments, the transmitting module 440 may transmit a processing result of a first processing operation to the second processing device 160. More descriptions for transmitting the processing result of the first processing operation to the second processing device 160 may be found elsewhere in the present disclosure (e.g., operation 770 in FIG. 7A and descriptions thereof) .
It should be noted that the above description of the first processing device 110 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, one or more modules may be combined into a single module. For example, the identification module 420 and the processing module 430 may be combined as a single module which may both identify one or more objects in an original image, and perform a first processing operation on at least a first part of the one or more objects. In some embodiments, one or more modules may be added. For example, the first processing device 110 may further include a storage module (not shown) used to store information and/or data (e.g., an original image, one or more identified objects) associated with the image processing system 100. In some embodiments, one or more modules may be omitted. For example, the transmitting module 440 may be omitted.
FIG. 5 is a block diagram illustrating an exemplary second processing device according to some embodiments of the present disclosure. The second processing device 160 may include an obtaining module 510, an identification module 520, a determination module 530, a processing module 540, and a storage module 550.
The obtaining module 510 may be configured to obtain data and/or information associated with the image processing system 100. For example, the obtaining module 510 may obtain a processing result of a first processing operation from the first processing device 110. More descriptions for obtaining the processing result of the first processing operation may be found elsewhere in the present disclosure (e.g., operation 910 in FIG. 9 and descriptions thereof) . In some embodiments, the obtaining module 510 may obtain the data and/or information associated with the image processing system 100 from one or more components (e.g., the first processing device 110, the capture device 120, the terminal device 130, the storage device 140) of the image processing system 100 via the network 150.
The identification module 520 may be configured to identify one or more objects in an image. In some embodiments, the identification module 520 may also be configured to, in response to determining that a difference between a count of one or more objects in an original image and a count of processed objects is greater than a threshold, select, from the one or more objects in the original image, at least one second object that satisfies a second preset condition. More descriptions for selecting the at least one second object that satisfies the second preset condition may be found elsewhere in the present disclosure (e.g., operation 930 in FIG. 9 and descriptions thereof) .
The determination module 530 may be configured to determine data and/or information associated with the image processing system 100. In some embodiments, the determination module 530 may determine whether a difference between a count of one or more objects in an original image and a count of processed objects is greater than a threshold based on a processing result of a first processing operation. More descriptions for determining whether the difference between the count of the one or more objects in the original image and the count of processed objects is greater than the threshold may be found elsewhere in the present disclosure (e.g., operation 920 in FIG. 9 and descriptions thereof) .
The processing module 540 may be configured to perform a second processing operation on at least a second part of one or more objects in an original image. In some embodiments, the processing module 540 may perform a second processing operation on at least one second object that satisfies a second preset condition. More descriptions for performing the second processing operation on the at least one second object that satisfies the second preset condition may be found elsewhere in the present disclosure (e.g., operation 940 in FIG. 9 and descriptions thereof) . In some embodiments, the second processing device 160 may determine at least one unprocessed object that satisfies a first preset condition based on a processing result of a first processing operation. The processing module 540 may perform a second processing operation on the at least one unprocessed object that satisfies the first preset condition. More descriptions for performing the second processing operation on the at least one unprocessed object that satisfies the first preset condition may be found elsewhere in the present disclosure (e.g., operation 950 in FIG. 9 and descriptions thereof) .
The storage module 550 may be configured to store data and/or information associated with the image processing system 100. In some embodiments, the storage module 550 may store an original image, a second image, a processed image, one or more objects identified in the original image, one or more objects identified in the second image, a first processing operation, a second processing operation, a first preset condition, a second preset condition, a feature of an object, a priority corresponding to the feature of the object, a weight corresponding to the feature of the object, a static repeating object in the second image, or the like, or any combination thereof.
It should be noted that the above description of the second processing device 160 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, one or more modules may be combined into a single module. For example, the identification module 520 and the determination module 530 may be combined as a single module. In some embodiments, one or more modules may be omitted. For example, the storage module 550 may be omitted. In some embodiments, one or more modules may be added. For example, the second processing device 160 may further include a preview module (not shown) used to preview information and/or data (e.g., an original image, one or more identified objects in the original image, a processing result of a first processing operation) associated with the image processing system 100. For illustration purposes, a user of the image processing system 100 may preview the processing result of the first processing operation via the preview module, and adjust the first processing operation based on a preview result. As another example, the second processing device 160 may further include a playback module (not shown) used to play back information and/or data (e.g., an original image, a processing result of a first processing operation) stored in a storage device (e.g., the storage device 140, the storage module 550) .
FIG. 6 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure. The process 600 may be executed by the image processing system 100. For example, the process 600 may be implemented as a set of instructions stored in the storage 390. The processor 220 and/or the module (s) in FIG. 4 and FIG. 5 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the module (s) may be configured to perform the process 600. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 600 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 600 as illustrated in FIG. 6 and described below is not intended to be limiting.
In 610, the first processing device 110 (e.g., the obtaining module 410) may obtain an original image captured by a capture device.
The original image may be a static image, a series of image frames, a video, etc. The original image may be a two-dimensional image, a three-dimensional image, a four-dimensional image, etc. The original image may further include voice information associated with the original image.
In some embodiments, the first processing device 110 may obtain the original image from the capture device (e.g., the capture device 120) periodically (e.g., per second, per 2 seconds, per 5 seconds, per 10 seconds) or in real time. In some embodiments, during the capturing of the original image, the capture device 120 may transmit the original image to a  storage device (e.g., the storage device 140) periodically (e.g., per second, per 2 seconds, per 5 seconds, per 10 seconds) or in real time via the network 150. Further, the first processing device 110 may access the storage device and retrieve the original image. In some embodiments, the first processing device 110 may be a processing device of the capture device.
In some embodiments, the original image may be an original video stream captured by the capture device. In some embodiments, the capture device may obtain the original video stream, and process the original video stream into multiple original images frame by frame.
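Merely for illustration, a minimal Python sketch of this frame-by-frame decomposition is shown below; OpenCV and the file name are assumptions introduced for demonstration, not part of the present disclosure.

    import cv2  # OpenCV is an assumed tool, not required by the disclosure

    def frames_from_stream(path):
        """Yield a video stream as individual frames (original images)."""
        cap = cv2.VideoCapture(path)  # `path` is a hypothetical source
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                yield frame
        finally:
            cap.release()

    for original_image in frames_from_stream("surveillance.mp4"):
        pass  # hand each frame to the first processing device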
In 620, the first processing device 110 (e.g., the identification module 420) may identify one or more objects in the original image.
In some embodiments, the one or more objects may refer to information associated with privacy that needs to be processed (e.g., masked, coded, blurred, cut) in the original image. In some embodiments, the one or more objects may include a person, an animal, a facility, or a part thereof, etc. For example, the one or more objects may include face information (e.g., a face portion of a person) , identification information (e.g., a name, an ID number) , working information (e.g., an occupation, a working address) , or the like, or any combination thereof.
In some embodiments, the first processing device 110 may detect the one or more objects in the original image according to an object detection algorithm (e.g., an inter-frame difference algorithm, a background difference algorithm, an optical flow algorithm) .
For example, the one or more objects may include a face of a person. The first processing device 110 may detect the face in the original image according to one or more face detection algorithms. Exemplary face detection or recognition algorithms may include a knowledge-based technique, a feature-based technique, a template matching technique, an eigenface-based technique, a distribution-based technique, a neural-network based technique, a support vector machine (SVM) based technique, a sparse network of winnows (SNoW) based technique, a naive bayes classifier, a hidden markov model, an information theoretical algorithm, an inductive learning technique, or the like, or any combination thereof.
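Merely for illustration, the minimal sketch below applies one feature-based technique from the list above, an OpenCV Haar-cascade classifier; the classifier file, the input file name, and the parameter values are common defaults used here as assumptions, and any of the listed techniques could be substituted.

    import cv2

    # Feature-based face detection with a stock Haar cascade (an assumption).
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    original = cv2.imread("original.jpg")  # hypothetical input image
    gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # `faces` holds one (x, y, w, h) rectangle per detected face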
As another example, the one or more objects may include identification information and/or working information of a person. The first processing device 110 may recognize the identification information and/or the working information in the original image according to one or more text recognition algorithms. Exemplary text recognition algorithms may include a template algorithm, an indicative algorithm, a structural recognition algorithm, an artificial neural network, or the like, or any combination thereof.
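Similarly, a minimal sketch of such text recognition, assuming the Tesseract OCR engine through the pytesseract wrapper (an assumed tool; any text recognition algorithm listed above could be substituted):

    import pytesseract            # assumes the Tesseract OCR engine is installed
    from PIL import Image

    # Recognize identification or working information printed in the image.
    text = pytesseract.image_to_string(Image.open("original.jpg"))
    print(text)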
In 630, the first processing device 110 (e.g., the processing module 430) may perform a first processing operation on at least a first part of the one or more objects.
In some embodiments, the first processing operation may be an image processing operation. For example, the first processing operation may include a masking operation, a coding operation, a blurring operation, a cutting operation, or the like, or any combination thereof. In some embodiments, the first processing device 110 may perform the first processing operation  on the at least the first part of the one or more objects by adding a letter, a number, a pattern, or the like, or any combination thereof, on the at least the first part of the one or more objects. In some embodiments, the first processing device 110 may perform the first processing operation on the at least the first part of the one or more objects by replacing the at least the first part of the one or more objects with a letter, a number, a pattern, or the like, or any combination thereof. In some embodiments, the first processing device 110 may perform the first processing operation on the at least the first part of the one or more objects by coding or blurring the at least the first part of the one or more objects. In some embodiments, the first processing device 110 may perform the first processing operation on the at least the first part of the one or more objects by cutting the at least the first part of the one or more objects.
For example, the one or more objects may include one or more human faces. The first processing device 110 may process at least one human face (or a portion thereof (e.g., an area including eyes) ) by replacing the at least one human face or the portion thereof with a default pattern or a random pattern, or process the at least one human face (or a portion thereof (e.g., an area including eyes) ) by blurring or cutting the at least one human face in the original image. As another example, the one or more objects may include identification information (e.g., an ID number) of a person. The identification information may include a plurality of characters (e.g., a mark, a sign, a symbol, a letter, a Chinese character) . The first processing device 110 may process (e.g., modify) one or more characters of the plurality of characters of the identification information. Specifically, the first processing device 110 may replace one or more characters of the plurality of characters of the identification information with default value (s) or random value (s) , or code (or blur) the one or more characters of the plurality of characters of the identification information.
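Merely for illustration, the minimal Python sketch below implements the blurring, coding (mosaic), and masking variants of such a processing operation on a rectangular object region; the kernel size and block count are illustrative assumptions.

    import cv2

    def blur_region(img, box, ksize=(51, 51)):
        """Blurring operation: Gaussian-blur the object region in place."""
        x, y, w, h = box
        img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], ksize, 0)

    def code_region(img, box, blocks=8):
        """Coding (mosaic) operation: pixelate by down- then up-sampling."""
        x, y, w, h = box
        roi = img[y:y+h, x:x+w]
        small = cv2.resize(roi, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
        img[y:y+h, x:x+w] = cv2.resize(small, (w, h),
                                       interpolation=cv2.INTER_NEAREST)

    def mask_region(img, box):
        """Masking operation: replace the object region with a solid pattern."""
        x, y, w, h = box
        img[y:y+h, x:x+w] = 0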
In some embodiments, due to a limited computing capacity of the first processing device 110, when a computing capacity required to process all the identified object (s) in the original image exceeds a computing capacity of the first processing device 110, the first processing device 110 may select the first part of the one or more objects (e.g., at least one first object) , and perform the first processing operation on the first part of the one or more objects. For example, the first processing device 110 may determine whether a count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110. In response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110, the first processing device 110 may select, from the one or more objects, at least one first object that satisfies a first preset condition. Then the first processing device 110 may perform the first processing operation on at least part of the at least one first object that satisfies the first preset condition. In response to determining that the count of the one or more objects in the original image does not exceed the computing capacity of the first processing device 110, the first processing device 110 may perform the first processing operation on the one or more objects in the original image. More descriptions for  performing the first processing operation may be found elsewhere in the present disclosure (e.g., FIG. 7A and descriptions thereof) .
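Merely for illustration, the following minimal Python sketch captures the capacity check and split described above; the names `capacity` (a count of objects the first device can process per image) and `satisfies_first_condition` are hypothetical and introduced only for demonstration.

    # Hypothetical names for illustration; the disclosure does not fix them.
    def split_objects(objects, capacity, satisfies_first_condition):
        """Return (first-device objects, objects deferred to the second device)."""
        if len(objects) <= capacity:
            return objects, []      # within capacity: process everything
        first = [o for o in objects if satisfies_first_condition(o)][:capacity]
        deferred = [o for o in objects if o not in first]
        return first, deferred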
In some embodiments, after the first processing operation is performed on the at least the first part of the one or more objects, the first processing device 110 may generate a processing result of the first processing operation, and transmit the processing result of the first processing operation to the second processing device 160. In some embodiments, the processing result of the first processing operation may include a processed image. The processed image may include processed object (s) and unprocessed object (s) in the original image. For example, the processing result of the first processing operation may include information associated with object (s) that do not satisfy the first preset condition, information associated with processed object (s) that satisfy the first preset condition, information associated with unprocessed object (s) that satisfy the first preset condition, or the like, or any combination thereof. More description of the processing result of the first processing operation may be found elsewhere in the present disclosure (e.g., FIG. 7A and descriptions thereof) .
In 640, the second processing device 160 (e.g., the processing module 540) may perform a second processing operation on at least a second part of the one or more objects.
In some embodiments, the second processing operation may be the same as or different from the first processing operation. For example, the second processing operation may include a masking operation, a coding operation, a blurring operation, a cutting operation, or the like, or any combination thereof.
In some embodiments, the second processing device 160 may obtain the processing result of the first processing operation from the first processing device 110. The second processing device 160 may determine at least one unprocessed object that satisfies the first preset condition based on the processing result of the first processing operation. The second processing device 160 may perform the second processing operation on the at least one unprocessed object that satisfies the first preset condition.
In some embodiments, the second processing device 160 may determine whether a difference between the count of the one or more objects in the original image and a count of processed objects is greater than a threshold based on the processing result of the first processing operation. In response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, the second processing device 160 may select, from the one or more objects, at least one second object that satisfies a second preset condition. Then the second processing device 160 may perform the second processing operation on the at least one second object that satisfies the second preset condition.
More descriptions for performing the second processing operation may be found elsewhere in the present disclosure (e.g., FIG. 9 and descriptions thereof) .
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
FIG. 7A is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure. The process 700 may be executed by the image processing system 100. For example, the process 700 may be implemented as a set of instructions stored in the storage 390. The processor 220 and/or the module (s) in FIG. 4 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the module (s) may be configured to perform the process 700. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 700 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 700 are performed, as illustrated in FIG. 7A and described below, is not intended to be limiting.
In 710, the first processing device 110 (e.g., the obtaining module 410) may obtain an original image captured by a capture device.
Operation 710 may be performed in a similar manner as operation 610 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
In 720, the first processing device 110 (e.g., the identification module 420) may identify one or more objects in the original image.
Operation 720 may be performed in a similar manner as operation 620 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
In 730, the first processing device 110 (e.g., the processing module 430) may determine whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device 110.
In some embodiments, the first processing device 110 may determine a maximum count of objects that the first processing device 110 is able to process (e.g., mask) based on the computing capacity of the first processing device 110. The first processing device 110 may determine whether the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110 by determining whether the count of the one or more objects in the original image exceeds the maximum count of objects that the first processing device 110 is able to process. In response to determining that the count of the one or more objects in the original image exceeds the maximum count of objects that the first processing device 110 is able to process, the first processing device 110 may determine that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110. In response to determining that the count of the one or more objects in the original image does not exceed the maximum count of objects that the first processing device  110 is able to process, the first processing device 110 may determine that the count of the one or more objects in the original image does not exceed the computing capacity of the first processing device 110.
In response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110, process 700 may proceed to operation 740.
In response to determining that the count of the one or more objects in the original image does not exceed the computing capacity of the first processing device 110, process 700 may proceed to operation 760.
In 740, in response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may select, from the one or more objects, at least one first object that satisfies a first preset condition.
In some embodiments, after the one or more objects are identified in the original image, the first processing device 110 may extract feature (s) of each of the one or more objects. In some embodiments, the feature of the object may include an angle of the object, an image quality of the object, a distance between the object and the capture device, a type of the object, or the like, or any combination thereof.
As used herein, an angle of an object refers to a deflection angle between a surface of the object (e.g., a front surface of a human body of a person) and an image plane of an image (e.g., the original image) . In some embodiments, the deflection angle may be represented by a deflection direction (e.g., an upward deflection, a downward deflection, a left deflection, a right deflection) and a deflection angle value (e.g., 20 degrees, 60 degrees) . For example, the angle of the object may be deflecting 20 degrees in the upward deflection, deflecting 65 degrees in the right deflection, etc. As used herein, an image quality of an object refers to a clarity of the object in an image (e.g., the original image) . In some embodiments, the image quality of the object may be represented by a count of pixels per unit area of an image area of the object. As used herein, a distance between an object and a capture device refers to a straight-line distance from the object to a specific point (e.g., a center point) of the capture device. The type of the object may include an infant, a child, an adolescent, an adult, etc. The type of the object may also include a visitor, an internal staff, etc.
In some embodiments, the first processing device 110 may extract the feature (s) of the object by analyzing the object in the original image. For example, the first processing device 110 may determine the angle of the object, the image quality of the object, the distance between the object and the capture device, and/or the type of the object based on a machine learning model (e.g., a neural network model, a regression model, a classification tree model) . As another example, the first processing device 110 may determine the distance between the object and the capture device based on image coordinates of the object in an image coordinate system  and parameters (e.g., intrinsic parameters, extrinsic parameters) of the capture device. As used herein, an image coordinate system refers to a coordinate system that describes positions of an object in an image captured by a capture device. In the image coordinate system, an origin may be an upper-left corner point of the image. The X-axis may be from a left side to a right side of the image, and the Y-axis may be from an upper side to a lower side of the image.
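As one possible concrete realization of estimating the distance from image coordinates and capture-device parameters, the sketch below uses a pinhole-camera approximation; the formula, the focal length expressed in pixels, and the nominal real-world object height are assumptions made for illustration, since the disclosure does not commit to a specific geometric model.

```python
def estimate_distance_m(pixel_height: float, focal_length_px: float,
                        real_height_m: float = 1.7) -> float:
    """Pinhole-model distance estimate: distance ~ f_px * H_real / h_px.

    pixel_height:    height of the object's bounding box in pixels
    focal_length_px: focal length expressed in pixels (an intrinsic parameter)
    real_height_m:   assumed real-world object height (hypothetical nominal value)
    """
    return focal_length_px * real_height_m / pixel_height
```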
In some embodiments, the first preset condition may include that the angle of the object is less than a first angle threshold, the image quality of the object is greater than a first quality threshold, the distance between the object and the capture device is less than a first distance threshold, the type of the object belongs to a preset type, or the like, or any combination thereof.
In some embodiments, the first preset condition (e.g., the first angle threshold, the first quality threshold, the first distance threshold, the preset type) may be manually set by a user of the image processing system 100, or automatically set by one or more components (e.g., the first processing device 110) of the image processing system 100 according to different situations. In some embodiments, the first preset condition may be determined based on a monitoring requirement, a monitoring environment, etc. For example, if the monitoring environment is a public place, such as a bus station or a subway, the required accuracy of image processing (e.g., masking the one or more objects in the original image) for monitoring may be relatively high; accordingly, the first angle threshold may be set relatively large, the first quality threshold may be set relatively low, and/or the first distance threshold may be set relatively large.
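A minimal sketch of the first preset condition expressed as a predicate over the extracted features follows; the record fields, the threshold values, and the choice to require all sub-conditions simultaneously are assumptions (the disclosure allows any combination of the sub-conditions).

```python
from dataclasses import dataclass

@dataclass
class ObjectFeatures:
    angle_deg: float   # deflection angle between the object surface and the image plane
    quality: float     # e.g., pixels per unit area of the object's image region
    distance_m: float  # straight-line distance between the object and the capture device
    obj_type: str      # e.g., "adult", "child", "visitor", "internal staff"

def satisfies_first_condition(f: ObjectFeatures,
                              angle_th: float = 30.0,      # first angle threshold (illustrative)
                              quality_th: float = 0.5,     # first quality threshold (illustrative)
                              distance_th: float = 5.0,    # first distance threshold (illustrative)
                              preset_types=("visitor",)) -> bool:
    # Hypothetical combination rule: every sub-condition must hold.
    return (f.angle_deg < angle_th
            and f.quality > quality_th
            and f.distance_m < distance_th
            and f.obj_type in preset_types)
```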
In 750, the first processing device 110 (e.g., the processing module 430) may perform a first processing operation on at least part of the at least one first object that satisfies the first preset condition.
In some embodiments, the first processing device 110 may determine whether a count of the at least one first object that satisfies the first preset condition exceeds the computing capacity of the first processing device 110. In response to determining that the count of the at least one first object that satisfies the first preset condition does not exceed the computing capacity of the first processing device 110, the first processing device 110 may perform the first processing operation on the at least one first object that satisfies the first preset condition. In response to determining that the count of the at least one first object exceeds the computing capacity of the first processing device 110, the first processing device 110 may perform the first processing operation on a part of the at least one first object based on at least one feature of each of the at least one first object.
In some embodiments, the first processing device 110 may perform the first processing operation on the part of the at least one first object based on at least one priority corresponding to the at least one feature respectively. As used herein, a priority corresponding to a feature refers to a priority of performing the first processing operation on an object including the feature. The priority corresponding to the feature may indicate an importance of the feature in the first processing operation. In some embodiments, a priority corresponding to “angle” may be higher than a priority corresponding to “image quality.” In some embodiments, a priority corresponding to “distance” may be higher than the priority corresponding to “image quality.” In some embodiments, the priority corresponding to “angle” > the priority corresponding to “distance” > the priority corresponding to “image quality.” In some embodiments, the priority corresponding to “angle” > the priority corresponding to “image quality” > the priority corresponding to “distance.”
For example, assuming that the priority corresponding to “angle” > the priority corresponding to “image quality” > the priority corresponding to “distance,” for the at least one first object that satisfies the first preset condition, the first processing device 110 may determine whether a count of first object (s) with an angle of the first object less than the first angle threshold exceeds the computing capacity of the first processing device 110. In response to determining that the count of first object (s) with the angle of the first object less than the first angle threshold exceeds the computing capacity of the first processing device 110, the first processing device 110 may determine a ranking of the first object (s) with the angle of the first object less than the first angle threshold according to the angle of each of the first object (s) in an ascending order. Merely for illustration purposes, if an angle of a first object A is 10 degrees, an angle of a first object B is 20 degrees, and an angle of a first object C is 30 degrees, the first processing device 110 may determine a ranking of the first objects as: first object A, first object B, first object C. Further, the first processing device 110 may perform the first processing operation on one or more of the first object (s) with the angle of the first object less than the first angle threshold according to the ranking of the first object (s) based on the computing capacity of the first processing device 110.
In response to determining that the count of the first object (s) with the angle of the first object less than the first angle threshold does not exceed the computing capacity of the first processing device 110, the first processing device 110 may perform the first processing operation on the first object (s) with the angle of the first object less than the first angle threshold. The first processing device 110 may then determine whether a count of first object (s) with the image quality of the first object greater than the first quality threshold exceeds the remaining computing capacity of the first processing device 110. Similarly, in response to determining that the count of the first object (s) with the image quality of the first object greater than the first quality threshold exceeds the remaining computing capacity of the first processing device 110, the first processing device 110 may determine a ranking of the first object (s) with the image quality of the first object greater than the first quality threshold according to the image quality of each of the first object (s) in a descending order. The first processing device 110 may perform the first processing operation on one or more of the first object (s) with the image quality of the first object greater than the first quality threshold according to the ranking of the first object (s) based on the remaining computing capacity of the first processing device 110.
In response to determining that the count of the first object (s) with the image quality of the first object greater than the first quality threshold does not exceed the remaining computing capacity of the first processing device 110, the first processing device 110 may further determine whether a count of first object (s) with the distance between the first object and the capture device less than the first distance threshold exceeds the remaining computing capacity of the first processing device 110. Similarly, in response to determining that the count of the first object (s) with the distance between the first object and the capture device less than the first distance threshold exceeds the remaining computing capacity of the first processing device 110, the first processing device 110 may determine a ranking of the first object (s) with the distance between the first object and the capture device less than the first distance threshold according to the distance between each of the first object (s) and the capture device in an ascending order. The first processing device 110 may perform the first processing operation on one or more of the first object (s) with the distance between the first object and the capture device less than the first distance threshold according to the ranking of the first object (s) based on the remaining computing capacity of the first processing device 110.
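The cascade just described can be sketched as follows; it reuses the hypothetical ObjectFeatures record from above, and the stage ordering (angle, then image quality, then distance) matches one of the priority orderings given in the text. Each stage ranks its candidates and consumes whatever capacity budget remains.

```python
def select_by_priority(objects, capacity, angle_th, quality_th, distance_th):
    """Choose up to `capacity` objects for the first processing operation,
    consuming the remaining budget stage by stage (angle > quality > distance)."""
    stages = [
        (lambda o: o.angle_deg < angle_th,     lambda o: o.angle_deg,  False),  # ascending
        (lambda o: o.quality > quality_th,     lambda o: o.quality,    True),   # descending
        (lambda o: o.distance_m < distance_th, lambda o: o.distance_m, False),  # ascending
    ]
    chosen, remaining = [], list(objects)
    for predicate, key, descending in stages:
        budget = capacity - len(chosen)
        if budget <= 0:
            break
        candidates = sorted((o for o in remaining if predicate(o)),
                            key=key, reverse=descending)
        taken = candidates[:budget]
        chosen.extend(taken)
        taken_ids = {id(o) for o in taken}
        remaining = [o for o in remaining if id(o) not in taken_ids]
    return chosen
```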
In some embodiments, the first processing device 110 may determine at least one weight corresponding to the at least one feature respectively. As used herein, a weight corresponding to a feature of an object may indicate an importance of the feature in the first processing operation. Further, for each of the at least one first object, the first processing device 110 may determine a weighted result based on the at least one weight corresponding to the at least one feature respectively and feature value (s) of the at least one feature of the first object.
In some embodiments, the first processing device 110 may determine the feature value (s) according to a mapping approach. Taking “distance” as an example, the first processing device 110 may define a specific range (e.g., 0~1) , map the distance between the first object and the capture device into the specific range, and determine a feature value corresponding to “distance” based on a mapping value corresponding to the distance. For example, if the distance between the first object and the capture device is larger than a first threshold (e.g., 5 m) , the first processing device 110 may determine a corresponding mapping value as 0; if the distance between the first object and the capture device is smaller than a second threshold (e.g., 0.5 m) , the first processing device 110 may determine a corresponding mapping value as 1. Accordingly, the smaller the distance between the first object and the capture device is, the larger the corresponding mapping value may be. The first threshold and/or the second threshold may be default settings of the image processing system 100 or may be adjustable under different situations.
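A sketch of this mapping follows; the endpoint thresholds come from the example above, while the linear interpolation between them is an assumption the disclosure leaves open.

```python
def map_distance(distance_m: float, far: float = 5.0, near: float = 0.5) -> float:
    """Map a distance into [0, 1]: 0 at or beyond `far` (e.g., 5 m),
    1 at or within `near` (e.g., 0.5 m), linear in between (assumed)."""
    if distance_m >= far:
        return 0.0
    if distance_m <= near:
        return 1.0
    return (far - distance_m) / (far - near)
```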
The first processing device 110 may then determine a weighted result for the first object based on the feature value (s) of the first object and weight (s) corresponding to the feature (s) . Merely for illustration purposes, it is assumed that a first weight corresponding to a first feature (e.g., “angle” ) is W1, a second weight corresponding to a second feature (e.g., “image quality” ) is W2, a third weight corresponding to a third feature (e.g., “distance” ) is W3, a value of the first feature of a first object A is A1, a value of the second feature of the first object A is A2, a value of the third feature of the first object A is A3, a value of the first feature of a first object B is B1, a value of the second feature of the first object B is B2, a value of the third feature of the first object B is B3, a value of the first feature of a first object C is C1, a value of the second feature of the first object C is C2, and a value of the third feature of the first object C is C3; then the first processing device 110 may determine that a weighted result for first object A is WA=A1×W1+A2×W2+A3×W3, a weighted result for first object B is WB=B1×W1+B2×W2+B3×W3, and a weighted result for first object C is WC=C1×W1+C2×W2+C3×W3.
Furthermore, the first processing device 110 may perform the first processing operation on the part of the at least one first object that satisfies the first preset condition based on the weighted result (s) . For example, the first processing device 110 may determine a ranking of the at least one first object that satisfies the first preset condition according to the weighted result for each of the at least one first object in a descending order. The first processing device 110 may perform the first processing operation on one or more of the at least one first object that satisfies the first preset condition according to the ranking of the at least one first object based on the computing capacity of the first processing device 110.
In 760, in response to determining that the count of the one or more objects in the original image does not exceed the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may perform the first processing operation on the one or more objects in the original image.
For example, in response to determining that the count of the one or more objects in the original image does not exceed the computing capacity of the first processing device 110, the first processing device 110 may perform a masking operation on all of the one or more objects in the original image.
In 770, the first processing device 110 (e.g., the transmitting module 440) may transmit a processing result of the first processing operation to a second processing device.
In some embodiments, the processing result of the first processing operation may include information associated with the one or more objects in the original image. For example, the processing result of the first processing operation may include information associated with object (s) that do not satisfy the first preset condition, information associated with processed object (s) that satisfy the first preset condition, information associated with unprocessed object (s) that satisfy the first preset condition, or the like, or any combination thereof.
In some embodiments, the information associated with each of the one or more objects in the original image may include the at least one feature of the object, coordinate information of the object, mark information of the object, or the like, or any combination thereof. The coordinate information of the object may include image coordinates of the object in the image coordinate system. The mark information of the object may indicate whether the object satisfies  the first preset condition, whether the first processing operation is performed on the object, or the like, or any combination thereof. For example, an object with a mark of “0” may indicate that the object does not satisfy the first preset condition, an object with a mark of “10” may indicate that the object satisfies the first preset condition and the first processing operation is not performed on the object, and an object with a mark of “11” may indicate that the object satisfies the first preset condition and the first processing operation is performed on the object.
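For illustration, the mark information could be carried alongside coordinate information in per-object records like the following; the field names are hypothetical, but the mark values mirror the “0” / “10” / “11” scheme described above.

```python
# Mark semantics, as described above:
#   "0"  - does not satisfy the first preset condition
#   "10" - satisfies the first preset condition; first processing NOT performed
#   "11" - satisfies the first preset condition; first processing performed
processing_result = [
    {"object_id": 1, "mark": "11", "bbox": (40, 60, 32, 32)},   # already masked
    {"object_id": 2, "mark": "10", "bbox": (200, 80, 30, 30)},  # left for device 160
    {"object_id": 3, "mark": "0",  "bbox": (350, 90, 28, 28)},  # no processing needed
]

# The second processing device can pick out its work items directly:
unprocessed = [r for r in processing_result if r["mark"] == "10"]
```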
In some embodiments, the first processing device 110 may transmit the original image, a processed image (i.e., an image obtained after the first processing operation is performed on the original image) , and the processing result of the first processing operation to the second processing device 160.
In some embodiments, after obtaining the processing result of the first processing operation, the second processing device 160 may determine at least one unprocessed object that satisfies the first preset condition based on the processing result of the first processing operation. For example, the second processing device 160 may determine at least one unprocessed object that satisfies the first preset condition based on the mark information of each of the one or more objects. As another example, the second processing device 160 may determine the at least one unprocessed object that satisfies the first preset condition based on the at least one feature of each of the one or more objects. Further, the second processing device 160 may perform a second processing operation (e.g., a masking operation) on the at least one unprocessed object that satisfies the first preset condition. For example, the second processing device 160 may perform the second processing operation on the at least one unprocessed object that satisfies the first preset condition based on coordinate information of each of the at least one unprocessed object.
In some embodiments, the second processing device 160 may determine whether a difference between the count of the one or more objects in the original image and a count of processed objects is greater than a threshold based on the processing result of the first processing operation. In response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, the second processing device 160 may select, from the one or more objects, at least one second object that satisfies a second preset condition. The second processing device 160 may perform the second processing operation on the at least one second object that satisfies the second preset condition. More descriptions for performing the second processing operation on the at least one second object that satisfies the second preset condition may be found elsewhere in the present disclosure (e.g., FIG. 9 and descriptions thereof) .
According to some embodiments of the present disclosure, in response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110, the at least one first object that satisfies the first preset condition may be selected from the one or more objects, and the first processing operation may be performed on the at least part of the at least one first object that satisfies the first preset condition. In the case of limited computing capacity of the first processing device, a part of the identified objects in the original image may be processed selectively and effectively by the first processing device, and the other part of the identified objects in the original image may be processed by the second processing device, which can prevent identified objects in the original image from being left unprocessed due to the limited computing capacity of the first processing device, and improve the accuracy and efficiency of image processing. For example, the first processing operation may be performed on object (s) that satisfy the first preset condition (e.g., object (s) with a relatively small angle of the object, a relatively high image quality of the object, and/or a relatively short distance between the object and the capture device) by the first processing device 110, and object (s) that do not satisfy the first preset condition may not need to be processed or may be processed by the second processing device 160, to save the computing capacity of the first processing device 110. In addition, for a same object (e.g., a same person) appearing in different images, when features of the person in the different images change, the processing methods of the object in the different images may be different, which achieves dynamic image processing.
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the first processing device 110 may determine a priority corresponding to each type of feature information in the feature of the object respectively. For example, for the feature “type of the object,” a priority corresponding to “infant (or a child) ” may be higher than a priority corresponding to “adult.” As another example, a priority corresponding to “visitor” may be higher than a priority corresponding to “internal staff.”
FIG. 7B is a schematic diagram illustrating an exemplary process for performing a first processing operation on a plurality of objects based on priorities corresponding to features respectively according to some embodiments of the present disclosure.
As shown in FIG. 7B, a first priority corresponding to a feature A (e.g., “angle” ) is higher than a second priority corresponding to a feature B (e.g., “image quality” ) , and the second priority corresponding to the feature B is higher than a third priority corresponding to a feature C (e.g., “distance” ) . The first processing device 110 may first perform a first processing operation on object A, object B, and object C that satisfy a preset condition associated with the feature A (e.g., a condition that the angle of the object is less than a first angle threshold) . The first processing device 110 may then perform the first processing operation on object D and object E that satisfy a preset condition associated with the feature B (e.g., a condition that the image quality of the object is greater than a first quality threshold) . The first processing device 110 may further perform the first processing operation on object F and object G that satisfy a preset condition  associated with the feature C (e.g., a condition that the distance between the object and the capture device is less than a first distance threshold) . A count of processed objects by the first processing device 110 may be determined based on a computing capacity of the first processing device 110.
FIG. 7C is a schematic diagram illustrating an exemplary process for performing a first processing operation on a plurality of objects based on a weighted result for each of the plurality of objects according to some embodiments of the present disclosure.
As shown in FIG. 7C, a first weight corresponding to a feature A (e.g., “angle” ) is 0.3, a second weight corresponding to feature B (e.g., “image quality” ) is 0.5, a third weight corresponding to a feature C (e.g., “distance” ) is 0.2. A value of the feature A of an object A is 0.5, a value of the feature B of the object A is 0.6, a value of the feature C of the object A is 0.2, and the first processing device 110 may determine that a weighted result for object A is 0.49 (i.e., 0.5×0.3+ 0.6×0.5+0.2×0.2= 0.49) . A value of the feature A of an object B is 0.1, a value of the feature B of the object B is 0.5, a value of the feature C of the object B is 0.9, and the first processing device 110 may determine that a weighted result for object B is 0.46 (i.e., 0.1×0.3+0.5×0.5+0.9×0.2= 0.46) . A value of the feature A of an object C is 0.6, a value of the feature B of the object C is 0.7, a value of the feature C of the object C is 0.1, and the first processing device 110 may determine that a weighted result for object C is 0.55 (i.e., 0.6×0.3+ 0.7×0.5+0.1 ×0.2= 0.55) .
The first processing device 110 may then determine a ranking of the plurality of objects (i.e., object A, object B, and object C) based on the weighted result for each of the plurality of objects in a descending order. For example, the first processing device 110 may determine the ranking of the plurality of objects as: object C > object A > object B. The first processing device 110 may further perform a first processing operation on one or more of the plurality of objects according to the ranking of the plurality of objects based on a computing capacity of the first processing device 110.
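The FIG. 7C computation can be reproduced with a few lines of Python; the dictionary layout is hypothetical, but the weights, feature values, and resulting ranking are exactly those of the figure.

```python
weights = {"angle": 0.3, "quality": 0.5, "distance": 0.2}

feature_values = {
    "object A": {"angle": 0.5, "quality": 0.6, "distance": 0.2},
    "object B": {"angle": 0.1, "quality": 0.5, "distance": 0.9},
    "object C": {"angle": 0.6, "quality": 0.7, "distance": 0.1},
}

scores = {name: sum(weights[f] * v for f, v in feats.items())
          for name, feats in feature_values.items()}
# scores is approximately {"object A": 0.49, "object B": 0.46, "object C": 0.55}

ranking = sorted(scores, key=scores.get, reverse=True)
# ranking == ["object C", "object A", "object B"], matching FIG. 7C
```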
FIG. 8A is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure. The process 800 may be executed by the image processing system 100. For example, the process 800 may be implemented as a set of instructions stored in the storage 390. The processor 220 and/or the module (s) in FIG. 4 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the module (s) may be configured to perform the process 800. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 800 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 800 are performed, as illustrated in FIG. 8A and described below, is not intended to be limiting.
In 810, the first processing device 110 (e.g., the obtaining module 410) may obtain a second image that is adjacently captured after an original image.
In some embodiments, the original image and the second image may be continuous in time. As used herein, “an image A and an image B being continuous in time” means that a time difference between a first time point corresponding to the image A and a second time point corresponding to the image B is less than a threshold. As used herein, “a time point corresponding to an image” refers to a time point when the image is acquired (e.g., by a capture device) . For example, the original image and the second image may be consecutive image frames in a video.
In 820, the first processing device 110 (e.g., the identification module 420) may identify one or more objects in the second image.
Operation 820 may be performed in a similar manner as operation 620 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
In 830, the first processing device 110 (e.g., the processing module 430) may determine whether there is a static repeating object in the second image based on one or more objects in the original image and the one or more objects in the second image.
In some embodiments, the first processing device 110 may extract feature information of each of the one or more objects in the original image. The feature information may include a shape (e.g., a contour, an area, a height, a width, a ratio of height to width) , a color, a texture, or the like, or any combination thereof, of the object or a portion of the object, such as a face component (e.g., eyes, the nose, the mouth) of the object. For example, the first processing device 110 may detect a human face in the original image according to one or more face detection algorithms as described elsewhere in the present disclosure. The first processing device 110 may extract the feature information of the human face according to one or more feature extraction algorithms. Exemplary feature extraction algorithms may include a principal component analysis (PCA) , a linear discriminant analysis (LDA) , an independent component analysis (ICA) , a multi-dimensional scaling (MDS) algorithm, a discrete cosine transform (DCT) algorithm, or the like, or any combination thereof.
The first processing device 110 may then extract feature information of each of the one or more objects in the second image. The extraction of the feature information of the object in the second image may be performed in a similar manner as that of the feature information of the object in the original image.
The first processing device 110 may further determine whether there is a static repeating object in the second image based on the feature information of each of the one or more objects in the original image and the feature information of each of the one or more objects in the second image. For example, for each object in the second image, the first processing device 110 may determine a degree of similarity between the object in the second image and each object in the original image based on the feature information of the object in the second image and the feature information of the each object in the original image. In response to determining that a degree of similarity between an object in the second image and an object in  the original image is greater than a similarity threshold (e.g., 95%, 99%) , the first processing device 110 may determine the object in the second image as the static repeating object in the second image.
A degree of similarity between an object A (e.g., an object in the original image) and an object B (e.g., an object in the second image) may be determined by various approaches. Merely by way of example, the first processing device 110 may determine a first feature vector representing the feature information of the object A (also referred to as the first feature vector corresponding to the object A) . The first processing device 110 may determine a second feature vector representing the feature information of the object B (also referred to as the second feature vector corresponding to the object B) . The first processing device 110 may determine the degree of similarity between the object A and the object B by determining a degree of similarity between the first feature vector and the second feature vector. A degree of similarity between two feature vectors may be determined based on a similarity algorithm, for example, a Euclidean distance algorithm, a Manhattan distance algorithm, a Minkowski distance algorithm, a cosine similarity algorithm, a Jaccard similarity algorithm, a Pearson correlation algorithm, or the like, or any combination thereof.
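Merely as a sketch of one of the listed similarity measures (cosine similarity) applied to the feature vectors; the function names are illustrative, and the default threshold mirrors the example value (95%) given in the text.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_static_repeating(vec_original: np.ndarray, vec_second: np.ndarray,
                        threshold: float = 0.95) -> bool:
    """An object in the second image is flagged as a static repeating object
    when its feature vector is close enough to one from the original image."""
    return cosine_similarity(vec_original, vec_second) > threshold
```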
In some embodiments, the first processing device 110 may determine whether there is a static repeating object in the second image based on the one or more objects in the original image and the one or more objects in the second image using a trained machine learning model (e.g., a neural network model, a regression model, a classification tree model) .
FIG. 8B is a schematic diagram illustrating an exemplary static repeating object according to some embodiments of the present disclosure. As shown in FIG. 8B, a first image 801 and a second image 802 are adjacent images captured by a capture device (e.g., the capture device 120) . An object A and an object B are identified in the first image 801. An object C and an object D are identified in the second image 802. The object C is a static repeating object in the second image 802 corresponding to the object A in the first image 801.
In 840, in response to determining that there is a static repeating object in the second image, the first processing device 110 (e.g., the processing module 430) may determine that the static repeating object satisfies a first preset condition (e.g., the first preset condition as described in connection with operation 750 in FIG. 7A) .
In some embodiments, in response to determining that there is a static repeating object in the second image, the first processing device 110 may obtain a processing result of a first processing operation associated with the original image. The first processing device 110 may determine whether an object in the original image corresponding to the static repeating object in the second image satisfies the first preset condition based on the processing result of the first processing operation (e.g., mark information of the object in the original image) associated with the original image. In response to determining that the object in the original image corresponding to the static repeating object in the second image satisfies the first preset  condition, the first processing device 110 may determine that the static repeating object satisfies the first preset condition. Further, the first processing device 110 may perform the first processing operation on the static repeating object in the second image.
According to some embodiments of the present disclosure, a determination may be made as to whether there is a static repeating object in the second image, and in response to determining that there is a static repeating object in the second image, the static repeating object may be processed based on the processing result of the first processing operation associated with the original image. For example, if the object in the original image corresponding to the static repeating object in the second image satisfies the first preset condition, the static repeating object in the second image may also be considered to satisfy the first preset condition. Therefore, it is not necessary to determine whether the static repeating object in the second image satisfies the first preset condition, which can save time of image processing, and improve the efficiency of image processing.
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
FIG. 9 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure. The process 900 may be executed by the image processing system 100. For example, the process 900 may be implemented as a set of instructions stored in the storage 390. The processor 220 and/or the module (s) in FIG. 5 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the module (s) may be configured to perform the process 900. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 900 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 900 are performed, as illustrated in FIG. 9 and described below, is not intended to be limiting.
In 910, the second processing device 160 (e.g., the obtaining module 510) may obtain a processing result of a first processing operation from a first processing device.
In some embodiments, the processing result of the first processing operation may include information associated with one or more objects in an original image, as described in connection with operation 770 in FIG. 7A. In some embodiments, the second processing device 160 may obtain the original image, a processed image (i.e., an image obtained after the first processing operation is performed on the original image) , and the processing result of the first processing operation from the first processing device 110 via the network 150.
In 920, the second processing device 160 (e.g., the determination module 530) may determine whether a difference between a count of one or more objects in an original image and  a count of processed objects is greater than a threshold based on the processing result of the first processing operation.
In some embodiments, the second processing device 160 may determine the count of the one or more objects in the original image and the count of the processed objects based on the processing result of the first processing operation. The second processing device 160 may determine the difference between the count of the one or more objects in the original image and the count of the processed objects by subtracting the count of the processed objects from the count of the one or more objects in the original image. In some embodiments, the threshold may be manually set by a user of the image processing system 100, or automatically set by one or more components (e.g., the first processing device 110, the second processing device 160) of the image processing system 100 according to different situations.
In 930, in response to determining that the difference between the count of the one or more objects in the original image and the count of processed objects is greater than the threshold, the second processing device 160 (e.g., the identification module 520) may select, from the one or more objects, at least one second object that satisfies a second preset condition.
In some embodiments, in response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, it may indicate that the count of the one or more objects in the original image far exceeds a computing capacity of the first processing device 110, that is, the count of the one or more objects in the original image is far greater than the maximum count of objects that the first processing device 110 is able to process.
The second processing device 160 may then select, from the one or more objects, the at least one second object that satisfies the second preset condition based on at least one feature of each of the one or more objects. In some embodiments, the second preset condition may be different from the first preset condition. For example, the second preset condition may include that an angle of the object is less than a second angle threshold, an image quality of the object is greater than a second quality threshold, a distance between the object and a capture device is less than a second distance threshold, or the like, or any combination thereof. The second angle threshold may be greater than a first angle threshold in the first preset condition. The second quality threshold may be less than a first quality threshold in the first preset condition. The second distance threshold may be greater than a first distance threshold in the first preset condition. Accordingly, when the computing capacity required to process all identified object (s) in the original image exceeds the computing capacity of the first processing device 110, the first processing device 110 may perform the first processing operation on a part of the identified object (s) in the original image that satisfies the first preset condition, and the second processing device 160 may perform the second processing operation on the other part of the identified object (s) in the original image that satisfies the second preset condition. Since the second preset condition may be different from the first preset condition, this arrangement can ensure the effective protection of sensitive and private information associated with the original image.
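Relative to the first-condition sketch earlier, the second preset condition relaxes each threshold; the numeric values below are purely illustrative, with only the inequalities (larger angle and distance thresholds, smaller quality threshold) taken from the text.

```python
def satisfies_second_condition(f,                        # an ObjectFeatures-like record
                               angle_th2: float = 45.0,     # > first angle threshold
                               quality_th2: float = 0.3,    # < first quality threshold
                               distance_th2: float = 8.0):  # > first distance threshold
    # Hypothetical combination rule, as in the first-condition sketch.
    return (f.angle_deg < angle_th2
            and f.quality > quality_th2
            and f.distance_m < distance_th2)
```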
In 940, the second processing device 160 (e.g., the processing module 540) may perform a second processing operation on the at least one second object that satisfies the second preset condition.
In some embodiments, the second processing device 160 may perform a masking operation, a coding operation, a blurring operation, a cutting operation, or the like, or any combination thereof, on the at least one second object that satisfies the second preset condition.
In 950, in response to determining that the difference between the count of the one or more objects in the original image and the count of processed objects is not greater than the threshold, the second processing device 160 (e.g., the processing module 540) may perform the second processing operation on at least one unprocessed object that satisfies a first preset condition.
In some embodiments, the second processing device 160 may determine the at least one unprocessed object that satisfies the first preset condition based on the processing result of the first processing operation. For example, the second processing device 160 may determine the at least one unprocessed object that satisfies the first preset condition based on mark information of the one or more objects. As another example, the second processing device 160 may determine the at least one unprocessed object that satisfies the first preset condition based on at least one feature of each of the one or more objects. Further, the second processing device 160 may perform the second processing operation (e.g., a masking operation) on the at least one unprocessed object that satisfies the first preset condition. For example, the second processing device 160 may perform the second processing operation on the at least one unprocessed object that satisfies the first preset condition based on coordinate information of the at least one unprocessed object.
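Putting the two branches of process 900 together, the following is a minimal sketch of the second processing device's dispatch logic; it reuses the hypothetical per-object mark records and the satisfies_second_condition predicate sketched earlier, and the "features" field is an assumption.

```python
def dispatch_second_processing(records, total_object_count, threshold):
    """Route the second processing operation per process 900. `records` are
    per-object dicts carrying "mark" and (hypothetically) a "features" entry
    holding an ObjectFeatures-like record."""
    processed_count = sum(1 for r in records if r["mark"] == "11")
    if total_object_count - processed_count > threshold:
        # The original image far exceeded the first device's capacity:
        # select second objects under the (relaxed) second preset condition.
        return [r for r in records if satisfies_second_condition(r["features"])]
    # Otherwise, finish the objects that satisfy the first preset condition
    # but were left unprocessed (mark "10").
    return [r for r in records if r["mark"] == "10"]
```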
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
FIG. 10 is a schematic diagram illustrating an exemplary processing result of a first processing operation according to some embodiments of the present disclosure.
As shown in FIG. 10, a processing result 1000 of a first processing operation associated with an original image may include one or more objects (e.g., an object C, an object D) that do not satisfy the first preset condition, one or more processed objects (e.g., an object A, an object B, an object F) that satisfy the first preset condition, and one or more unprocessed objects (e.g., an object E, an object G) that satisfy the first preset condition. For example, the object C and the object D are marked with marks “0” , which indicates that the object C and the object D do not satisfy the first preset condition. The object A, the object B, and the object F are marked with marks “11” , which indicates that the object A, the object B, and the object F satisfy the first preset condition, and the first processing operation is performed on the object A, the object B, and the object F. The object E and the object G are marked with marks “10” , which indicates that the object E and the object G satisfy the first preset condition, and the first processing operation is not performed on the object E and the object G.
FIG. 11 is a flowchart illustrating an exemplary process for image processing according to some embodiments of the present disclosure. The process 1100 may be executed by the image processing system 100. For example, the process 1100 may be implemented as a set of instructions stored in the storage 390. The processor 220 and/or the module (s) in FIG. 4 and FIG. 5 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the module (s) may be configured to perform the process 1100. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 1100 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 1100 are performed, as illustrated in FIG. 11 and described below, is not intended to be limiting.
In 1101, the first processing device 110 (e.g., the obtaining module 410) may obtain an original image captured by a capture device.
Operation 1101 may be performed in a similar manner as operation 610 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
In 1102, the first processing device 110 (e.g., the identification module 420) may identify one or more objects in the original image.
Operation 1102 may be performed in a similar manner as operation 620 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
In 1103, the first processing device 110 (e.g., the processing module 430) may determine whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device 110.
Operation 1103 may be performed in a similar manner as operation 730 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
In 1104, in response to determining that the count of the one or more objects in the original image does not exceed the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may perform a first processing operation on the one or more objects in the original image.
Operation 1104 may be performed in a similar manner as operation 760 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
After performing the first processing operation on the one or more objects in the original image, the first processing device 110 may transmit a processing result of the first processing operation to a second processing device 160.
In 1105, in response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may select, from the one or more objects, at least one first object that satisfies a first preset condition.
Operation 1105 may be performed in a similar manner as operation 740 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
In 1106, the first processing device 110 (e.g., the processing module 430) may determine whether a count of the at least one first object that satisfies the first preset condition exceeds the computing capacity of the first processing device.
Operation 1106 may be performed in a similar manner as operation 750 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
In 1107, in response to determining that the count of the at least one first object that satisfies the first preset condition does not exceed the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may perform the first processing operation on the at least one first object that satisfies the first preset condition.
Operation 1107 may be performed in a similar manner as operation 750 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
After performing the first processing operation on the at least one first object that satisfies the first preset condition, the first processing device 110 may transmit the processing result of the first processing operation to the second processing device 160.
In 1108, in response to determining that the count of the at least one first object exceeds the computing capacity of the first processing device 110, the first processing device 110 (e.g., the processing module 430) may perform the first processing operation on at least part of the at least one first object that satisfies the first preset condition.
Operation 1108 may be performed in a similar manner as operation 750 as described in connection with FIG. 7A, and the descriptions thereof are not repeated here.
After performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition, the first processing device 110 may transmit a processing result of the first processing operation to a second processing device 160.
In 1109, the second processing device 160 (e.g., the processing module 540) may determine at least one unprocessed object that satisfies the first preset condition based on the processing result of the first processing operation.
Operation 1109 may be performed in a similar manner as operation 640 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
In 1110, the second processing device 160 (e.g., the processing module 540) may perform a second processing operation on the at least one unprocessed object that satisfies the first preset condition.
Operation 1110 may be performed in a similar manner as operation 640 as described in connection with FIG. 6, and the descriptions thereof are not repeated here.
In some embodiments, if the second processing device 160 determines that there is at least one unprocessed object that satisfies the first preset condition, it may indicate that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device 110. To account for the possibility that the first processing device 110 has not identified all the objects in the original image, the second processing device 160 (e.g., the identification module 520) may re-identify object (s) in the original image. In response to determining that there are one or more new objects identified in the original image by the second processing device 160, the second processing device 160 may perform the second processing operation on at least part of the one or more new objects identified in the original image. In some embodiments, the second processing device 160 may select, from the one or more new objects, at least one new object that satisfies the first preset condition, and perform the second processing operation on the at least one new object that satisfies the first preset condition. In some embodiments, the second processing device 160 may perform the second processing operation on all the one or more new objects in the original image.
In some embodiments, a parameter matching operation may be performed between the first processing device 110 and the second processing device 160, such that the first preset condition stored in the first processing device 110 and the first preset condition stored in the second processing device 160 are the same.
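A minimal sketch of such a parameter matching operation, assuming the first preset condition is expressed as a shared configuration file (the file name and keys below are hypothetical):

import json

def load_first_preset_condition(path: str = "preset_condition.json") -> dict:
    """Load the shared first preset condition so that the first (edge) and
    second (back-end) processing devices apply identical selection rules,
    e.g., {"object_types": ["face"], "min_box_size": 24}."""
    with open(path) as f:
        return json.load(f)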
The methods and systems for image processing disclosed in the present disclosure may be applied to a monitoring system for a shopping mall or a plaza. In such scenarios, a large number of objects (e.g., human faces) in an original image captured by a capture device of the monitoring system may need to be processed (e.g., masked), and the count of the objects may exceed the computing capacity of the first processing device 110 of the capture device.
Generally, a plurality of objects identified in an image captured by a capture device may be processed (e.g., masked) based on an order of object identification time or based on positions of the plurality of objects in the image. For example, if a first object is identified in the original image first, a second object is identified next, and a third object is identified last, the first processing device 110 may process the first object first, then the second object, and then the third object. When a count of identified objects exceeds a computing capacity of a processing device of the capture device, subsequently identified object(s) may not be processed due to the limited computing capacity of the processing device of the capture device. As another example, the plurality of objects in the image may be processed from the top to the bottom (or from the left side to the right side) of the image. When a count of processed objects reaches the computing capacity of the processing device of the capture device, object(s) in at least one area of the image may not be processed due to the limited computing capacity of the processing device of the capture device.
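For illustration, the conventional position-based ordering described above may be sketched as follows; once the capacity is reached, every remaining object is simply dropped, which is the drawback addressed by the two-device scheme of the present disclosure.

def conventional_selection(objects: List[DetectedObject],
                           capacity: int) -> List[DetectedObject]:
    """Order objects from top to bottom, then left to right, and keep only
    as many as the capacity allows; the rest go unprocessed."""
    ordered = sorted(objects, key=lambda obj: (obj.box[1], obj.box[0]))
    return ordered[:capacity]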
According to some embodiments of the present disclosure, when the computing capacity required to process all identified object(s) in the original image exceeds the limited computing capacity of the first processing device 110, the first processing device 110 may perform the first processing operation on a part of the identified object(s) in the original image, and the second processing device 160 may perform the second processing operation on the other part of the identified object(s) in the original image. This division of work can improve the efficiency of image processing and ensure the effective protection of sensitive and private information associated with the original image.
It should be noted that the above description is merely provided for the purpose of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure and are within the spirit and scope of the exemplary embodiments of this disclosure.
Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and/or “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure.
Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of software and hardware that may all generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in a combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python or the like, conventional procedural programming languages, such as the “C” programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider), in a cloud computing environment, or offered as a service such as a Software as a Service (SaaS).
Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution, for example, an installation on an existing server or mobile device.
Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive embodiments lie in less than all features of a single foregoing disclosed embodiment.
In some embodiments, the numbers expressing quantities or properties used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about,” “approximate,” or “substantially.” For example, “about,” “approximate,” or “substantially” may indicate ±20% variation of the value it describes, unless otherwise stated. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable.
Each of the patents, patent applications, publications of patent applications, and other material, such as articles, books, specifications, publications, documents, things, and/or the like, referenced herein is hereby incorporated herein by this reference in its entirety for all purposes, excepting any prosecution file history associated with same, any of same that is inconsistent with or in conflict with the present document, or any of same that may have a limiting effect as to the broadest scope of the claims now or later associated with the present document. By way of example, should there be any inconsistency or conflict between the description, definition, and/or the use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or the use of the term in the present document shall prevail.
In closing, it is to be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the application. Other modifications that may be employed may be within the scope of the application. Thus, by way of example, but not of limitation, alternative configurations of the embodiments of the application may be utilized in accordance with the teachings herein. Accordingly, embodiments of the present application are not limited to that precisely as shown and described.

Claims (20)

  1. A system for image processing, comprising:
    at least one storage medium including a set of instructions; and
    at least one processor in communication with the at least one storage medium, wherein when executing the set of instructions, the at least one processor is directed to cause the system to perform operations including:
    obtaining an original image captured by a capture device;
    identifying one or more objects in the original image by a first processing device;
    performing a first processing operation on at least a first part of the one or more objects by the first processing device; and
    performing a second processing operation on at least a second part of the one or more objects by a second processing device.
  2. The system of claim 1, wherein the performing a first processing operation on at least a first part of the one or more objects by the first processing device includes:
    determining whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device;
    in response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device, selecting, from the one or more objects, at least one first object that satisfies a first preset condition; and
    performing the first processing operation on at least part of the at least one first object that satisfies the first preset condition.
  3. The system of claim 2, wherein the performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition includes:
    extracting at least one feature of each of the at least one first object that satisfies the first preset condition; and
    performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the at least one feature.
  4. The system of claim 3, wherein the at least one feature of each of the at least one first object includes at least one of an angle of the first object, an image quality of the first object, or a distance between the first object and the capture device.
  5. The system of claim 3, wherein the at least one feature of each of the at least one first object includes a type of the first object.
  6. The system of claim 3, wherein the performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the at least one feature includes:
    performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on at least one priority corresponding to the at least one feature respectively.
  7. The system of claim 3, wherein the performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the at least one feature includes:
    determining at least one weight corresponding to the at least one feature, respectively;
    determining a weighted result based on the at least one weight corresponding to the at least one feature respectively; and
    performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the weighted result.
  8. The system of claim 2, wherein the performing a second processing operation on at least a second part of the one or more objects by a second processing device includes:
    determining whether a difference between the count of the one or more objects in the original image and a count of processed objects is greater than a threshold based on a processing result of the first processing operation;
    in response to determining that the difference between the count of the one or more objects in the original image and the count of the processed objects is greater than the threshold, selecting, from the one or more objects, at least one second object that satisfies a second preset condition; and
    performing the second processing operation on the at least one second object that satisfies the second preset condition.
  9. The system of claim 8, wherein the first preset condition is different from the second preset condition.
  10. The system of claim 2, wherein the performing a second processing operation on at least a second part of the one or more objects by a second processing device includes:
    determining at least one unprocessed object that satisfies the first preset condition based on a processing result of the first processing operation; and
    performing the second processing operation on the at least one unprocessed object that satisfies the first preset condition.
  11. The system of claim 1, wherein the first processing device is a processing device of the  capture device, and the second processing device is a back-end processing device.
  12. The system of claim 2, wherein the operations further include:
    obtaining a second image that is adjacently captured after the original image;
    identifying one or more objects in the second image by the first processing device;
    determining whether there is a static repeating object in the second image based on the one or more objects in the original image and the one or more objects in the second image; and
    in response to determining that there is a static repeating object in the second image, determining that the static repeating object satisfies the first preset condition.
  13. A method for image processing, comprising:
    obtaining an original image captured by a capture device;
    identifying one or more objects in the original image by a first processing device;
    performing a first processing operation on at least a first part of the one or more objects by the first processing device; and
    performing a second processing operation on at least a second part of the one or more objects by a second processing device.
  14. The method of claim 13, wherein the performing a first processing operation on at least a first part of the one or more objects by the first processing device includes:
    determining whether a count of the one or more objects in the original image exceeds a computing capacity of the first processing device;
    in response to determining that the count of the one or more objects in the original image exceeds the computing capacity of the first processing device, selecting, from the one or more objects, at least one first object that satisfies a first preset condition; and
    performing the first processing operation on at least part of the at least one first object that satisfies the first preset condition.
  15. The method of claim 14, wherein the performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition includes:
    extracting at least one feature of each of the at least one first object that satisfies the first preset condition; and
    performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the at least one feature.
  16. The method of claim 15, wherein the at least one feature of each of the at least one first object includes at least one of an angle of the first object, an image quality of the first object, or a distance between the first object and the capture device.
  17. The method of claim 15, wherein the at least one feature of each of the at least one first object includes a type of the first object.
  18. The method of claim 15, wherein the performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the at least one feature includes:
    performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on at least one priority corresponding to the at least one feature respectively.
  19. The method of claim 15, wherein the performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the at least one feature includes:
    determining at least one weight corresponding to the at least one feature, respectively;
    determining a weighted result based on the at least one weight corresponding to the at least one feature respectively; and
    performing the first processing operation on the at least part of the at least one first object that satisfies the first preset condition based on the weighted result.
  20. A non-transitory computer readable medium, comprising executable instructions that, when executed by at least one processor, direct the at least one processor to perform a method, the method comprising:
    obtaining an original image captured by a capture device;
    identifying one or more objects in the original image by a first processing device;
    performing a first processing operation on at least a first part of the one or more objects by the first processing device; and
    performing a second processing operation on at least a second part of the one or more objects by a second processing device.
PCT/CN2023/083479 2022-03-30 2023-03-23 Systems and methods for image processing WO2023185646A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210326603.5 2022-03-30
CN202210326603.5A CN114419720B (en) 2022-03-30 2022-03-30 Image occlusion method and system and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2023185646A1 true WO2023185646A1 (en) 2023-10-05

Family

ID=81263599

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/083479 WO2023185646A1 (en) 2022-03-30 2023-03-23 Systems and methods for image processing

Country Status (2)

Country Link
CN (1) CN114419720B (en)
WO (1) WO2023185646A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419720B (en) * 2022-03-30 2022-10-18 浙江大华技术股份有限公司 Image occlusion method and system and computer readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108206930A (en) * 2016-12-16 2018-06-26 杭州海康威视数字技术股份有限公司 The method and device for showing image is covered based on privacy
CN109167912A (en) * 2018-09-05 2019-01-08 北京图示科技发展有限公司 A kind of image processing method and device of intelligent glasses
CN109886864A (en) * 2017-12-06 2019-06-14 杭州海康威视数字技术股份有限公司 Privacy covers processing method and processing device
US20200293767A1 * 2018-03-09 2020-09-17 Hanwha Techwin Co., Ltd. Method and apparatus for performing privacy masking by reflecting characteristic information of objects
CN112468823A (en) * 2020-11-10 2021-03-09 浙江大华技术股份有限公司 Privacy shielding method and device based on simulation video recording device and storage medium
CN114025173A (en) * 2021-11-17 2022-02-08 浙江大华技术股份有限公司 Image processing method, terminal and computer readable storage medium
CN114419720A (en) * 2022-03-30 2022-04-29 浙江大华技术股份有限公司 Image occlusion method and system and computer readable storage medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540894A (en) * 2009-04-10 2009-09-23 浙江工业大学 Shadowing method of privacy area in video surveillance
CN101945259B (en) * 2010-09-13 2013-03-13 珠海全志科技股份有限公司 Device and method for superimposing and keeping out video images
CN103179378A (en) * 2011-12-26 2013-06-26 天津市亚安科技股份有限公司 Video monitoring device with privacy sheltering function and privacy sheltering method
CN103890783B (en) * 2012-10-11 2017-02-22 华为技术有限公司 Method, apparatus and system for implementing video occlusion
CN105100545A (en) * 2014-05-23 2015-11-25 三亚中兴软件有限责任公司 Method for adjusting brightness of frame, multipoint control unit (MCU) and terminal
US9876964B2 (en) * 2014-05-29 2018-01-23 Apple Inc. Video coding with composition and quality adaptation based on depth derivations
CN106358069A (en) * 2016-10-31 2017-01-25 维沃移动通信有限公司 Video data processing method and mobile terminal
CN106375737B (en) * 2016-11-25 2020-08-28 浙江宇视科技有限公司 Method and device for local shielding of video image
CN107945103A (en) * 2017-11-14 2018-04-20 上海歌尔泰克机器人有限公司 The privacy screen method, apparatus and unmanned plane of unmanned plane image
KR101881391B1 (en) * 2018-03-09 2018-07-25 한화에어로스페이스 주식회사 Apparatus for performing privacy masking by reflecting characteristic information of objects
CN111343130A (en) * 2018-12-19 2020-06-26 中国移动通信集团辽宁有限公司 Privacy protection method, device and equipment
CN111586361B (en) * 2020-05-19 2021-10-15 浙江大华技术股份有限公司 Image processing method and related device
CN112950443B (en) * 2021-02-05 2023-11-24 深圳市镜玩科技有限公司 Self-adaptive privacy protection method, system, equipment and medium based on image sticker
CN113658219A (en) * 2021-07-22 2021-11-16 浙江大华技术股份有限公司 High-altitude parabolic detection method, device and system, electronic device and storage medium
CN114048489B (en) * 2021-09-01 2022-11-18 广东智媒云图科技股份有限公司 Human body attribute data processing method and device based on privacy protection
CN114143503A (en) * 2021-10-20 2022-03-04 浙江大华技术股份有限公司 Video occlusion method and device, computer equipment and readable storage medium
CN114119647A (en) * 2021-12-07 2022-03-01 航天科技控股集团股份有限公司 Privacy camera system based on deep learning


Also Published As

Publication number Publication date
CN114419720B (en) 2022-10-18
CN114419720A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
KR102319177B1 (en) Method and apparatus, equipment, and storage medium for determining object pose in an image
US10635890B2 (en) Facial recognition method and apparatus, electronic device, and storage medium
US9418283B1 (en) Image processing using multiple aspect ratios
JP7165742B2 (en) LIFE DETECTION METHOD AND DEVICE, ELECTRONIC DEVICE, AND STORAGE MEDIUM
WO2022161286A1 (en) Image detection method, model training method, device, medium, and program product
CN108229324B (en) Gesture tracking method and device, electronic equipment and computer storage medium
CN105893920B (en) Face living body detection method and device
US7912253B2 (en) Object recognition method and apparatus therefor
US11676390B2 (en) Machine-learning model, methods and systems for removal of unwanted people from photographs
CN106709404B (en) Image processing apparatus and image processing method
CN110956122B (en) Image processing method and device, processor, electronic device and storage medium
WO2019061658A1 (en) Method and device for positioning eyeglass, and storage medium
WO2019033569A1 (en) Eyeball movement analysis method, device and storage medium
CN111626163B (en) Human face living body detection method and device and computer equipment
CN110569756A (en) face recognition model construction method, recognition method, device and storage medium
CN110717497B (en) Image similarity matching method, device and computer readable storage medium
CN108491872B (en) Object re-recognition method and apparatus, electronic device, program, and storage medium
Tamilselvi et al. An ingenious face recognition system based on HRPSM_CNN under unrestrained environmental condition
EP3783524A1 (en) Authentication method and apparatus, and electronic device, computer program, and storage medium
CN112200115B (en) Face recognition training method, recognition method, device, equipment and storage medium
WO2023185646A1 (en) Systems and methods for image processing
Hebbale et al. Real time COVID-19 facemask detection using deep learning
CN110232381B (en) License plate segmentation method, license plate segmentation device, computer equipment and computer readable storage medium
KR101174103B1 (en) A face recognition method of Mathematics pattern analysis for muscloskeletal in basics
US11605220B2 (en) Systems and methods for video surveillance

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23778015

Country of ref document: EP

Kind code of ref document: A1