Summary of the Invention
The present invention is proposed in view of the above problems. The invention provides an image processing method and an image processing apparatus that use a field programmable gate array to implement a neural network algorithm for image processing.
According to an embodiment of the present disclosure, there is provided an image processing method, including: obtaining raw image data collected by an image acquisition unit; performing first image processing on the raw image data by a first image processing unit, to obtain first image data; determining, by a field programmable gate array unit based on the first image data, a detection result of a target in the first image data; and generating, by the first image processing unit based on the first image data and the detection result of the target, encoded image data corresponding to the target.
Additionally, in the image processing method according to an embodiment of the present disclosure, generating, by the first image processing unit based on the first image data and the detection result of the target, the image data corresponding to the target includes: cropping each frame of image data in the first image data based on said frame and the detection result of the target in said frame, to generate image data including only the target; and performing first encoding on the image data including only the target, to generate encoded image data corresponding to the target.
Additionally, in the image processing method according to an embodiment of the present disclosure, generating, by the first image processing unit based on the first image data and the detection result of the target, the image data corresponding to the target further includes: marking the target in each frame of image data in the first image data, which includes consecutive frame images, based on the first image data and the detection result of the target in each frame; and performing second encoding on the first image data including the consecutive frame images in which the target is marked, to generate encoded image data of the consecutive frames corresponding to the target.
Additionally, in the image processing method according to an embodiment of the present disclosure, the image processing method further includes: transmitting the first image data from the first image processing unit to the field programmable gate array unit via a first interface unit; and transmitting the detection result of the target from the field programmable gate array unit to the first image processing unit via a second interface unit.
Additionally, in the image processing method according to an embodiment of the present disclosure, the first image processing unit includes a cropping subunit and a first encoding subunit, both of which are hardware; the cropping step is performed by the cropping subunit, and the first encoding is performed by the first encoding subunit.
Additionally, in the image processing method according to an embodiment of the present disclosure, the first image processing unit includes a second encoding subunit, the second encoding subunit is hardware, and the second encoding is performed by the second encoding subunit.
Additionally, in the image processing method according to an embodiment of the present disclosure, the image processing method further includes: transmitting the encoded image data of the target via a network to a back-end server for processing.
Additionally, in the image processing method according to an embodiment of the present disclosure, the target includes a human face, and the processing performed by the back-end server includes at least one of face attribute analysis, face recognition, face beautification, and face cartoonization.
Additionally, in the image processing method according to an embodiment of the present disclosure, the step of obtaining the raw image data collected by the image acquisition unit, the step of performing the first image processing on the raw image data by the first image processing unit to obtain the first image data, the step of determining, by the field programmable gate array unit based on the first image data, the detection result of the target in the first image data, and the step of generating, by the first image processing unit based on the first image data and the detection result of the target, the encoded image data corresponding to the target are implemented by a camera, and the camera includes the image acquisition unit, the first image processing unit, and the field programmable gate array unit.
According to another embodiment of the present disclosure, there is provided an image processing apparatus, including: an image acquisition unit configured to collect raw image data; a first image processing unit configured to perform first image processing on the raw image data to obtain first image data; and a field programmable gate array unit configured to determine, based on the first image data, a detection result of a target in the first image data; wherein the first image processing unit is further configured to generate, based on the first image data and the detection result of the target, encoded image data corresponding to the target.
Additionally, in the image processing apparatus according to another embodiment of the present disclosure, the first image processing unit further includes: a cropping subunit configured to crop each frame of image data in the first image data based on said frame and the detection result of the target in said frame, to generate image data including only the target; and a first encoding subunit configured to perform first encoding on the image data including only the target, to generate encoded image data corresponding to the target.
Additionally, in the image processing apparatus according to another embodiment of the present disclosure, the first image processing unit further includes: a marking subunit configured to mark the target in each frame of image data in the first image data, which includes consecutive frame images, based on the first image data and the detection result of the target in each frame; and a second encoding subunit configured to perform second encoding on the first image data including the consecutive frame images in which the target is marked, to generate encoded image data of the consecutive frames corresponding to the target.
Additionally, the image processing apparatus according to another embodiment of the present disclosure further includes: a first interface unit configured to transmit the first image data from the first image processing unit to the field programmable gate array unit; and a second interface unit configured to transmit the detection result of the target from the field programmable gate array unit to the first image processing unit.
Additionally, in the image processing apparatus according to another embodiment of the present disclosure, the cropping subunit and the first encoding subunit are implemented in hardware.
Additionally, in the image processing apparatus according to another embodiment of the present disclosure, the marking subunit is implemented in software, in hardware, or in a combination of hardware and software, and the second encoding subunit is implemented in hardware.
Additionally, the image processing apparatus according to another embodiment of the present disclosure further includes a data transmission unit configured to transmit the encoded image data of the target via a network to a back-end server for processing.
Additionally, in the image processing apparatus according to another embodiment of the present disclosure, the target includes a human face, and the processing performed by the back-end server includes at least one of face attribute analysis, face recognition, face beautification, and face cartoonization.
Additionally, in the image processing apparatus according to another embodiment of the present disclosure, the image processing apparatus is a camera, the image acquisition unit includes an optical sensor, and the first image processing unit includes an image signal processing (ISP) module and a central processing unit (CPU) module.
According to the image processing method and image processing apparatus of the embodiments of the present disclosure, the neural network algorithm for target detection is implemented using a field programmable gate array serving as a coprocessor, so that the large number of convolution operations required by the neural network are realized with the parallel computing capability of the field programmable gate array; the high degree of parallelization greatly improves computational efficiency compared with single-core processor computation. Additionally, by performing video stream encoding with dedicated hardware while offloading the large-scale computation to the field programmable gate array, the load on the host processor chip is reduced, which effectively lowers the possibility of stuttering. After face detection at the front end, only pictures encoded from face frames of a fixed size are quickly transmitted, rather than the whole picture, which reduces the transmission bandwidth and facilitates computation on the back-end server. Furthermore, because the coupling between the field programmable gate array and the host processor chip is low, the two can be developed separately, which improves development efficiency and facilitates optimization.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and are intended to provide further explanation of the claimed technology.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the present disclosure clearer, example embodiments according to the present disclosure will be described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present disclosure, rather than all of them, and it should be understood that the present disclosure is not limited by the example embodiments described herein. Based on the embodiments of the present disclosure described herein, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present disclosure. Hereinafter, each embodiment of the present disclosure will be described in detail with reference to the accompanying drawings.
First, the image processing apparatus and its image processing method according to the embodiments of the present disclosure are outlined with reference to Fig. 1 and Fig. 2.
Fig. 1 is a block diagram illustrating the image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 10 shown in Fig. 1 may be, for example, a camera for performing video monitoring of a specific scene. Alternatively, the image processing apparatus 10 shown in Fig. 1 may be an independent image processing device, such as an image processing server. Alternatively, the image processing apparatus 10 shown in Fig. 1 may be configured in a server that performs target recognition and image processing on video data provided by a camera performing video monitoring of a specific scene.
Specifically, the image processing apparatus 10 according to the embodiment of the present disclosure includes an image acquisition unit 101, a first image processing unit 102, and a field programmable gate array unit 103. It is easily understood that the image processing apparatus 10 according to the embodiment of the present disclosure is not limited thereto, and may include other units such as a data transmission unit and interface units.
The image acquisition unit 101 is used to collect raw image data. In one embodiment of the present disclosure, the image acquisition unit 101 is configured by an image sensor such as a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) device. The image acquisition unit 101 may be physically separated from the subsequent first image processing unit 102 and field programmable gate array unit 103, in which case it sends the collected raw image data to the subsequent units in a wired or wireless manner. Alternatively, the image acquisition unit 101 may be physically co-located with, or even within the same housing as, the other units in the image processing apparatus 10, which receive the image data sent from the image acquisition unit 101 via an internal bus.
The first image processing unit 102 is used to perform first image processing on the raw image data to obtain first image data. In one embodiment of the present disclosure, the first image processing unit 102 includes a processing unit such as an image signal processor (ISP). The first image processing unit 102 performs predetermined image processing operations (such as image noise reduction, enhancement, and white balance) on the raw image data sent from the image acquisition unit 101 to obtain the first image data. The first image processing unit 102 sends the obtained first image data to the field programmable gate array unit 103.
The field programmable gate array unit 103 is used to determine, based on the first image data, a detection result of a target in the first image data. In the embodiments of the present disclosure, the target in the first image data may be any of various objects such as a face, a pedestrian, or a vehicle, which is not limited here. In some embodiments, the description takes a detected face as an example of the target. In one embodiment of the present disclosure, determining the target in the first image data includes determining that the first image data includes a pedestrian or a face serving as the target. The field programmable gate array unit 103 is configured by a field programmable gate array (FPGA). As a general-purpose chip, an FPGA realizes parallel computation by mapping an algorithm onto hardware. For example, a low-bit convolutional neural network (BCNN) algorithm may be implemented with the FPGA, in which the lookup tables (LUTs) of the FPGA, available on the order of 10K or even 100K, are used as convolution computation modules so as to make full use of the FPGA resources and improve computing power. Meanwhile, the LUT-based convolution computation modules are provided with statically configurable channel numbers and data bit widths, as well as dynamically configurable loop counts and modes; in cooperation with an adjustable parameter control module (for example, an ARM processor of the FPGA), CNN models of different architectures can be implemented quickly. Each module is written in a hardware description language, so that optimization can start from the underlying hardware, reducing redundancy. The field programmable gate array (FPGA) in the embodiments of the present disclosure may include, in addition to the FPGA module, other modules (such as an ARM processor), which is not limited here.
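The disclosure does not give the internals of the BCNN algorithm, but the reason LUTs suit low-bit convolution can be illustrated with a hedged software sketch: when weights and activations are binarized to {+1, -1}, a convolution's dot product reduces to XNOR and popcount, exactly the kind of bit-level logic an FPGA lookup table implements cheaply. The function names and toy sizes below are illustrative, not from the patent.

```python
# Sketch: one output of a binarized (1-bit) convolution computed with
# XNOR + popcount, the bit-level form that FPGA lookup tables map onto.

def binarize(values):
    """Pack the signs of real weights/activations into an integer (bit set -> +1)."""
    bits = 0
    for i, v in enumerate(values):
        if v >= 0:
            bits |= 1 << i
    return bits

def xnor_popcount_dot(a_bits, b_bits, n):
    """Dot product of two {+1,-1} vectors packed into integers.
    XNOR counts matching signs; dot = matches - mismatches = 2*matches - n."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)
    matches = bin(xnor).count("1")
    return 2 * matches - n

# A 3x3 patch of activations and a 3x3 kernel, flattened.
patch  = [0.5, -1.2, 0.3, 0.9, -0.1, -0.7, 0.2, 0.4, -0.3]
kernel = [1.0, -1.0, 1.0, 1.0, 1.0, -1.0, 1.0, 1.0, -1.0]

out = xnor_popcount_dot(binarize(patch), binarize(kernel), 9)

# Reference: same dot product of the sign vectors in ordinary arithmetic.
ref = sum((1 if x >= 0 else -1) * (1 if y >= 0 else -1)
          for x, y in zip(patch, kernel))
print(out, ref)  # → 7 7
```

In hardware, the XNOR/popcount step becomes pure combinational logic, which is why a large bank of LUTs can evaluate many such convolution outputs in parallel.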
After the detection result of the target in the first image data is determined by the convolutional neural network that the field programmable gate array unit 103 implements for the first image data, the field programmable gate array unit 103 provides the detection result of the target to the first image processing unit 102. The first image processing unit 102 generates, based on the first image data and the detection result of the target, encoded image data corresponding to the target. As will be described below with reference to the drawings, the first image processing unit 102 may generate encoded image data including only the target (that is, the generated encoded image data corresponding to the target is data in a picture format), or may generate encoded image data of consecutive frames corresponding to the target (that is, the generated encoded image data corresponding to the target is data in a video format). When the first image processing unit 102 generates encoded image data including only the target, i.e., data in a picture format, it generates, for each frame of the first image data, as many pieces of target-only encoded image data as the number of detected targets, each piece including exactly one target. For example, if the targets in a certain frame of the first image data are two faces, two pieces of target-only encoded image data (i.e., two images) are generated, each including one face. Generating encoded image data of consecutive frames corresponding to the target refers to generating encoded image data in a video format in which the target is marked in the consecutive frames of the first image data.
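The per-target cropping just described (one output image per detected face) can be sketched in software as follows. This is an illustrative model, not the hardware cropping subunit itself; the frame and detection-box representations are assumptions.

```python
# Sketch: produce one target-only image per detection box in a frame.
# The frame is modeled as a list of pixel rows; a detection result is a
# list of (x, y, w, h) boxes, one per detected target (e.g., per face).

def crop_targets(frame, boxes):
    """Return one cropped sub-image per detection box."""
    crops = []
    for (x, y, w, h) in boxes:
        crops.append([row[x:x + w] for row in frame[y:y + h]])
    return crops

# A toy 6x8 grayscale frame in which two "faces" were detected.
frame = [[10 * r + c for c in range(8)] for r in range(6)]
detections = [(1, 1, 2, 2), (5, 3, 2, 2)]  # two targets -> two images

images = crop_targets(frame, detections)
print(len(images))   # 2: one target-only image per detected face
print(images[0])     # [[11, 12], [21, 22]]
```

Each cropped image would then be encoded individually (for example as JPEG), matching the two-faces-to-two-images behavior described above.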
In a specific example, the image processing apparatus according to the present disclosure is a camera (for example, a snapshot camera), and the image processing apparatus may further include other components such as a lens and an image sensor in addition to the above units. By using the field programmable gate array and the first image processing unit in the camera to generate the encoded image data corresponding to the target, the image processing capability of the camera itself can be improved, so that some image processing operations (such as face detection and face image cropping) are completed locally in the camera; compared with the prior-art approach of relying on a server for the related image processing, the computing pressure on the server can be reduced.
Fig. 2 is a flow chart illustrating the image processing method according to an embodiment of the present disclosure. The image processing method 20 according to the embodiment of the present disclosure shown in Fig. 2 is performed by the image processing apparatus 10 shown in Fig. 1. The image processing method 20 according to the embodiment of the present disclosure shown in Fig. 2 includes the following steps.
In step S201, raw image data collected by the image acquisition unit is obtained. In one embodiment of the present disclosure, the image acquisition unit 101, configured by an image sensor such as a charge-coupled device or a complementary metal-oxide-semiconductor device, collects the raw image data of the monitored scene. Thereafter, the process proceeds to step S202.
In step S202, first image processing is performed on the raw image data by the first image processing unit, to obtain first image data. In one embodiment of the present disclosure, a processing unit such as an image signal processor (ISP) included in the first image processing unit 102 performs predetermined image processing operations (such as image noise reduction and enhancement) on the raw image data sent from the image acquisition unit 101 to obtain the first image data. Thereafter, the process proceeds to step S203.
In step S203, the detection result of the target in the first image data is determined by the field programmable gate array unit based on the first image data. In one embodiment of the present disclosure, the convolutional neural network implemented by the field programmable gate array unit 103 performs the low-bit convolutional neural network (BCNN) algorithm on the first image data to determine the detection result of the target in the first image data. Thereafter, the process proceeds to step S204.
In step S204, encoded image data corresponding to the target is generated by the first image processing unit based on the first image data and the detection result of the target. In one embodiment of the present disclosure, the first image processing unit 102 may, based on the first image data and the detection result of the target, generate per-frame encoded image data including only the target, or generate encoded image data of consecutive frames corresponding to the target.
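Steps S201 to S204 above can be sketched as a simple software pipeline. The stage functions below are hypothetical stand-ins for the image acquisition unit, the ISP, the FPGA detector, and the encoder; only the data flow between the steps is illustrated.

```python
# Sketch of the S201-S204 flow; each stub stands in for one hardware unit.

def acquire_raw():                      # S201: image acquisition unit
    return {"pixels": [[0] * 4] * 4}

def first_image_processing(raw):        # S202: ISP produces first image data
    return {"pixels": raw["pixels"], "denoised": True}

def fpga_detect(first_image):           # S203: FPGA unit runs the BCNN detector
    return [{"label": "face", "box": (0, 0, 2, 2)}]  # detection result

def encode_for_target(first_image, detections):  # S204: encode per target
    return [{"target": d["label"], "format": "jpeg"} for d in detections]

raw = acquire_raw()
first = first_image_processing(raw)
result = fpga_detect(first)
encoded = encode_for_target(first, result)
print([e["target"] for e in encoded])   # ['face']
```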
With the image processing apparatus and its image processing method according to the embodiments of the present disclosure described with reference to Fig. 1 and Fig. 2, the neural network algorithm for target detection is implemented with the field programmable gate array unit 103 serving as a coprocessor, so that the large number of convolution operations required by the neural network are realized with the parallel computing capability of the field programmable gate array unit 103; the high degree of parallelization greatly improves computational efficiency compared with single-core processor computation. By performing video stream encoding with dedicated hardware (that is, the first image processing unit 102) while offloading the large-scale computation to the field programmable gate array unit 103, the load on the host processor chip is reduced, which effectively lowers the possibility of stuttering.
Hereinafter, the image processing apparatus and its image processing method according to the embodiments of the present disclosure will be further described in detail with reference to Fig. 3 to Fig. 5.
Fig. 3 is a block diagram further illustrating the image processing apparatus according to an embodiment of the present disclosure. Compared with the image processing apparatus 10 described with reference to Fig. 1, the image processing apparatus 30 shown in Fig. 3 further includes a first interface unit 104, a second interface unit 105, and a data transmission unit 106. Additionally, as shown in Fig. 3, the first image processing unit 102 specifically includes an image processing subunit 1021, a cropping subunit 1022, a first encoding subunit 1023, a marking subunit 1024, and a second encoding subunit 1025. The image acquisition unit 101 and the field programmable gate array unit 103 shown in Fig. 3 are identical to the image acquisition unit 101 and the field programmable gate array unit 103 described with reference to Fig. 1, and their repeated description will be omitted here.
Specifically, the image acquisition unit 101 provides the collected raw image data to the first image processing unit 102. The image processing subunit 1021 in the first image processing unit 102 performs first image processing on the raw image data to obtain first image data. In one embodiment of the present disclosure, the image processing subunit 1021 is configured by an image signal processor (ISP). The image processing subunit 1021 performs the first image processing (such as image noise reduction and enhancement) on the raw image data sent from the image acquisition unit 101 to obtain the first image data.
The first interface unit 104 is used to transmit the first image data from the image processing subunit 1021 to the field programmable gate array unit 103. In one embodiment of the present disclosure, the first interface unit 104 is a BT1120 interface.
As described above with reference to Fig. 1, the field programmable gate array unit 103 determines, based on the first image data received via the first interface unit 104, the detection result of the target in the first image data.
The second interface unit 105 is used to transmit the detection result of the target from the field programmable gate array unit 103 to the first image processing unit 102. In one embodiment of the present disclosure, the second interface unit 105 is, for example, a USB, SPI, or LAN interface.
As described above with reference to Fig. 1, the first image processing unit 102 generates, based on the first image data and the detection result of the target received via the second interface unit 105, encoded image data corresponding to the target.
Specifically, the cropping subunit 1022 is used to crop each frame of image data in the first image data based on said frame and the detection result, received via the second interface unit 105, of the target in said frame, to generate per-frame image data including only the target. The first encoding subunit 1023 is used to perform first encoding on the target-only per-frame image data received from the cropping subunit 1022, to generate per-frame encoded image data corresponding to the target. In one embodiment of the present disclosure, the cropping subunit 1022 and the first encoding subunit 1023 are implemented in hardware. The first encoding subunit 1023 is, for example, a video encoder (VENC), which performs first encoding, such as Joint Photographic Experts Group (JPEG) encoding, on each frame of target-only image data to obtain a frame of image including only the target (for example, a face).
The marking subunit 1024 is used to mark the target in each frame of image data in the first image data, which includes consecutive frame images, based on the first image data and the detection result, received via the second interface unit 105, of the target in each frame. The second encoding subunit 1025 is used to perform second encoding on the first image data including the consecutive frame images in which the target is marked, to generate encoded image data of the consecutive frames corresponding to the target. In one embodiment of the present disclosure, the marking subunit 1024 is implemented in software, in hardware, or in a combination of hardware and software, and the second encoding subunit 1025 is implemented in hardware. The marking subunit 1024 is, for example, a video processing subsystem (VPSS), which uses the detection result of the target (for example, a face) provided by the field programmable gate array unit 103 to draw a frame, i.e., to mark the position of the target. The second encoding subunit 1025 is, for example, a video encoder (VENC), which performs second encoding, such as H.264 or H.265, on the first image data including the consecutive frame images in which the target is marked, to generate encoded image data of the consecutive frames corresponding to the target.
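The marking performed by the VPSS-style subunit, drawing a box at the detected target position in each frame before video encoding, can be modeled as below. This is a software sketch of the operation, not the VPSS hardware, and the pixel representation is an assumption.

```python
# Sketch: mark a detection box in a frame by overwriting its border
# pixels, the software analogue of the VPSS drawing a face frame.

def mark_target(frame, box, value=255):
    """Draw the (x, y, w, h) box border into a mutable 2D frame."""
    x, y, w, h = box
    for c in range(x, x + w):
        frame[y][c] = value              # top edge
        frame[y + h - 1][c] = value      # bottom edge
    for r in range(y, y + h):
        frame[r][x] = value              # left edge
        frame[r][x + w - 1] = value      # right edge
    return frame

frame = [[0] * 6 for _ in range(5)]
mark_target(frame, (1, 1, 4, 3))
for row in frame:
    print(row)
# Border of the 4x3 box is set to 255; interior and outside stay 0.
```

The marked frames would then be fed, unchanged in resolution and frame order, to the H.264/H.265 encoder.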
The data transmission unit 106 is used to transmit the encoded image data of the target (including the target-only JPEG images generated by the first encoding subunit 1023 and/or the H.264 or H.265 video, generated by the second encoding subunit 1025, in which the target is marked) via a network to a back-end server (not shown) for processing. In the case where the target includes a face, the processing performed by the back-end server may include at least one of face attribute analysis (such as age analysis and gender analysis), face recognition, face beautification, and face cartoonization.
Fig. 4 and Fig. 5 are flow charts further illustrating the image processing method according to the embodiments of the present disclosure. The image processing methods 40 and 50 according to the embodiments of the present disclosure shown in Fig. 4 and Fig. 5 are performed by the image processing apparatus 30 shown in Fig. 3. The image processing method 40 according to the embodiment of the present disclosure shown in Fig. 4 includes the following steps.
Steps S401 and S402 shown in Fig. 4 are identical to steps S201 and S202 described with reference to Fig. 2, respectively, and their repeated description will be omitted here.
After the first image data is obtained in step S402, the process proceeds to step S403. In step S403, the detection result of the target in the first image data is determined by the field programmable gate array unit based on the first image data. Thereafter, the process proceeds to step S404.
In step S404, each frame of image data in the first image data is cropped based on said frame and the detection result of the target in said frame, to generate image data including only the target. For each target in each frame of image data, one image is generally cropped correspondingly. In one embodiment of the present disclosure, the cropping subunit 1022 crops each frame of image data based on said frame in the first image data and the detection result of the target in said frame, to generate per-frame image data including only the target. Thereafter, the process proceeds to step S405.
In step S405, first encoding is performed on each frame of target-only image data, to generate per-frame encoded image data corresponding to the target. In one embodiment of the present disclosure, the first encoding subunit 1023 performs the first encoding on the target-only per-frame image data received from the cropping subunit 1022, to generate per-frame encoded image data corresponding to the target. As described above, the cropping subunit 1022 and the first encoding subunit 1023 are implemented in hardware. The first encoding subunit 1023 is, for example, a video encoder (VENC), which performs the first encoding, such as Joint Photographic Experts Group (JPEG) encoding, on each frame of target-only image data to obtain a frame of image including only the target (for example, a face). Thereafter, the process proceeds to step S406.
In step S406, the target in each frame of image data in the first image data, which includes consecutive frame images, is marked based on the first image data and the detection result of the target in each frame. The marking may be performed by adding an indication frame such as a face frame or a pedestrian frame, or in other feasible manners, which is not limited here. In one embodiment of the present disclosure, the marking subunit 1024 marks the target in each frame of image data in the first image data including the consecutive frame images, based on the first image data and the detection result of the target in each frame. Thereafter, the process proceeds to step S407.
In step S407, second encoding is performed on the first image data including the consecutive frame images in which the target is marked, to generate encoded image data of the consecutive frames corresponding to the target. In one embodiment of the present disclosure, the second encoding subunit 1025 performs the second encoding on the first image data including the consecutive frame images in which the target is marked, to generate encoded image data of the consecutive frames corresponding to the target. As described above, the marking subunit 1024 is implemented in software, in hardware, or in a combination of hardware and software, and the second encoding subunit 1025 is implemented in hardware. The marking subunit 1024 is, for example, a video processing subsystem (VPSS), which uses the detection result of the target (for example, a face) provided by the field programmable gate array unit 103 to draw a frame, i.e., to mark the position of the face frame. The second encoding subunit 1025 is, for example, a video encoder (VENC), which performs the second encoding, such as H.264 or H.265, on the first image data including the consecutive frame images in which the target is marked, to generate encoded image data of the consecutive frames corresponding to the target.
It should be understood that the above steps S404 and S405 for performing the first encoding and the above steps S406 and S407 for performing the second encoding need not be performed in the order shown in Fig. 4; they may be performed in an order selected as needed, or only one of the pair of steps S404 and S405 and the pair of steps S406 and S407 may be selected for execution.
The image processing method 50 according to the embodiment of the present disclosure shown in Fig. 5 comprises the following steps.
Steps S501 and S502 shown in Fig. 5 are identical to steps S201 and S202 described with reference to Fig. 2, respectively, and their repeated description is omitted here.
After the first image data is obtained in step S502, processing proceeds to step S503. In step S503, the first image data is transferred from the first image processing unit to the field programmable gate array unit via a first interface unit. In one embodiment of the present disclosure, the first image data is transferred from the first image processing unit 102 to the field programmable gate array unit 103 via a BT1120 interface. Thereafter, processing proceeds to step S504.
In step S504, the detection result of the target in the first image data is determined by the field programmable gate array unit based on the first image data. In one embodiment of the present disclosure, a convolutional neural network implemented by the field programmable gate array unit 103 performs a low-bit convolutional neural network (BCNN) algorithm on the first image data to determine the detection result of the target in the first image data. Thereafter, processing proceeds to step S505.
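The reason a low-bit (binarized) convolutional network suits an FPGA can be illustrated in a few lines: with weights and activations constrained to ±1, a dot product reduces to an XNOR followed by a population count, which maps directly onto LUT fabric with no multipliers. The packing scheme and function names below are illustrative assumptions, not the implementation of the present disclosure.

```python
def pack_signs(vec):
    """Pack a ±1 vector into an integer bitmask (bit i set if vec[i] == +1)."""
    bits = 0
    for i, v in enumerate(vec):
        if v >= 0:
            bits |= 1 << i
    return bits

def bcnn_dot(a_bits, w_bits, n):
    """Dot product of two packed ±1 vectors via XNOR + popcount:
    each matching bit contributes +1, each differing bit -1."""
    matches = bin(~(a_bits ^ w_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

# Cross-check the bitwise version against a full-precision dot product.
a = [+1, -1, +1, +1]
w = [+1, +1, -1, +1]
ref = sum(x * y for x, y in zip(a, w))
fast = bcnn_dot(pack_signs(a), pack_signs(w), len(a))
```

Because the inner loop becomes a single XNOR/popcount per weight word, the large number of convolution operations mentioned later in this disclosure parallelizes well on programmable logic.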
In step S505, the detection result of the target is transmitted from the field programmable gate array unit to the first image processing unit via a second interface unit. In one embodiment of the present disclosure, the detection result of the target is transmitted back from the field programmable gate array unit 103 to the first image processing unit 102 via a USB, SPI, or LAN interface. Thereafter, processing proceeds to step S506.
In step S506, the encoded image data corresponding to the target is generated by the first image processing unit based on the first image data and the detection result of the target. As described above, generating the encoded image data corresponding to the target includes generating the encoded image data corresponding to the target in each frame as described with reference to steps S404 and S405 in Fig. 4, and also includes generating the encoded image data in which consecutive frames correspond to the target as described with reference to steps S406 and S407 in Fig. 4. Thereafter, processing proceeds to step S507.
In step S507, the encoded image data of the target is transmitted over a network to a back-end server for processing. In one embodiment of the present disclosure, the encoded image data of the target (including the JPEG image containing only the target generated by the first encoding sub-unit 1023 and/or the H.264 or H.265 video in which the target has been marked generated by the second encoding sub-unit 1025) is transmitted over the network to a back-end server (not shown) for processing. In the case where the target includes a face, the processing performed by the back-end server includes at least one of face attribute analysis, face recognition, face beautification, and face cartoonization.
In the image processing equipment 30 according to the embodiment of the present disclosure shown in Fig. 3 and the image processing methods 40 and 50 according to the embodiments of the present disclosure described with reference to Fig. 4 and Fig. 5, two data paths are configured for the image data collected by the image acquisition unit 101. In one path, the data is fed directly to the field programmable gate array unit 103 for computation of the face detection algorithm; the face portion of the video is then cropped according to the detection result returned by the field programmable gate array unit 103 and JPEG-encoded. In the other path, based on the detection result returned by the field programmable gate array unit 103, an encoded video code stream is output after passing through units such as the VPSS and the VENC. The degree of coupling between the two data paths is therefore low, and the appropriate data path can be selected for output according to actual application needs. Meanwhile, no large amount of computation takes place inside the image processing sub-unit 1021 (the ISP chip): operations such as ISP, encoding, and cropping are performed directly by hardware, reducing the system load. The data path including the first encoding sub-unit 1023 encodes only the relatively small face-region images, so that over a thousand faces can be encoded per second, and the output face pictures can be fed directly into face recognition computation, making the whole face detection and tracking process more convenient.
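The low coupling between the two data paths can be made concrete with a sketch: both paths consume the same frame and the same detection result returned by the programmable logic, but produce independent outputs (a cropped target region for the JPEG path, a marked full frame for the video path), so either path can be enabled on its own. The frame layout, box format, and names below are assumptions made for illustration.

```python
def crop_target(frame, box):
    """JPEG path: cut out only the target region (first-encoding input)."""
    x0, y0, x1, y1 = box
    return [row[x0:x1 + 1] for row in frame[y0:y1 + 1]]

def mark_frame(frame, box, value=255):
    """Video path: copy the frame and draw the target's bounding border."""
    out = [row[:] for row in frame]
    x0, y0, x1, y1 = box
    for x in range(x0, x1 + 1):
        out[y0][x] = out[y1][x] = value
    for y in range(y0, y1 + 1):
        out[y][x0] = out[y][x1] = value
    return out

def dispatch(frame, detection, paths):
    """Run only the selected data paths; neither depends on the other."""
    out = {}
    if "jpeg" in paths:
        out["jpeg"] = crop_target(frame, detection)
    if "video" in paths:
        out["video"] = mark_frame(frame, detection)
    return out

frame = [[0] * 8 for _ in range(8)]
result = dispatch(frame, (2, 2, 5, 5), paths={"jpeg"})
```

Selecting only `"jpeg"` yields just the small 4x4 face crop, which is why the first-encoding path can sustain a high per-second face throughput while leaving the video path idle.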
Hereinafter, examples of the image processing equipment according to the embodiments of the present disclosure and of an image processing system including the image processing equipment will be further described with reference to Fig. 6 and Fig. 7.
Fig. 6 is a block diagram illustrating a camera according to an embodiment of the present disclosure. The camera 60 shown in Fig. 6 is a specific example of the image processing equipment 10 according to the embodiment of the present disclosure described with reference to Fig. 1 and of the image processing equipment 30 according to the embodiment of the present disclosure described with reference to Fig. 3.
As shown in Fig. 6, the camera 60 according to the embodiment of the present disclosure includes an optical sensor 601, a first image processing unit 602, and a field programmable gate array 603. The optical sensor 601 in the camera 60 corresponds to the image acquisition unit 101 illustrated in Fig. 1 and Fig. 3, and is used to collect raw image data. The first image processing unit 602 corresponds to the first image processing unit 102 illustrated in Fig. 1 and Fig. 3, and further includes an image signal processing (ISP) module 6021 and a central processing unit (CPU) module 6022, which perform the first image processing on the raw image data by means of hardware and/or software and generate the encoded image data corresponding to the target. The field programmable gate array 603 corresponds to the field programmable gate array 103 illustrated in Fig. 1 and Fig. 3, and is used to determine, based on the first image data, the detection result of the target in the first image data.
Fig. 7 is a schematic diagram illustrating an image processing system according to an embodiment of the present disclosure. The image processing system 7 shown in Fig. 7 includes the camera 60 and a server 70. The camera 60 obtains raw image data, performs the first image processing on the raw image data to obtain the first image data, determines the detection result of the target in the first image data based on the first image data using the field programmable gate array unit configured therein, and generates the encoded image data corresponding to the target based on the first image data and the detection result of the target. The camera 60 transmits the finally obtained encoded image data of the target (including the JPEG image containing only the target generated by the first encoding sub-unit 1023 and/or the H.264 or H.265 video in which the target has been marked generated by the second encoding sub-unit 1025) over a wired or wireless network to the back-end server 70 for at least one of face attribute analysis, face recognition, face beautification, and face cartoonization.
Above, the image processing method, the image processing equipment, and the image processing system according to the embodiments of the present disclosure have been described with reference to Fig. 1 to Fig. 7. Therein, by using the field programmable gate array as a coprocessor to realize the neural network algorithm for target detection, the large number of convolution operations required by the neural network is carried out using the parallel computing capability of the field programmable gate array, with fully parallelized processing, so that the computational efficiency is greatly improved compared with single-core processor computation. Furthermore, by using dedicated hardware for video stream encoding while offloading the large-scale computation onto the field programmable gate array, the load on the main processor chip is relieved and the possibility of stuttering is effectively reduced. Only pictures of fixed-size face frames, encoded after rapid face detection at the front end, are transmitted, rather than the whole picture, which reduces the transmission bandwidth while facilitating computation on the background server. Further, since the degree of coupling between the field programmable gate array and the main processor chip is low, the two can be developed separately, which improves development efficiency and facilitates optimization.
The general principle of the present disclosure has been described above in conjunction with specific embodiments. It should be noted, however, that the merits, advantages, effects, and the like mentioned in the present disclosure are merely exemplary and not limiting, and they should not be considered prerequisites for each embodiment of the present disclosure. In addition, the specific details disclosed above are merely for the purpose of example and ease of understanding, and are not limiting; the above details do not restrict the present disclosure to being realized only by using those specific details.
The block diagrams of devices, apparatuses, equipment, and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "including", "comprising", and "having" are open-ended, mean "including but not limited to", and may be used interchangeably therewith. The words "or" and "and" as used herein mean "and/or" and may be used interchangeably therewith, unless the context clearly indicates otherwise. The words "such as" used herein refer to the phrase "such as, but not limited to", and may be used interchangeably therewith.
In addition, as used herein, the "or" used in an enumeration of items beginning with "at least one" indicates a disjunctive enumeration, so that an enumeration such as "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Moreover, the wording "example" does not mean that the described example is preferred or better than other examples.
It should also be noted that, in the systems and methods of the present disclosure, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalents of the present disclosure.
Various changes, substitutions, and alterations may be made to the technology described herein without departing from the technology taught as defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufactures, compositions of matter, means, methods, and actions described above. Processes, machines, manufactures, compositions of matter, means, methods, or actions that currently exist or are later to be developed and that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include within their scope such processes, machines, manufactures, compositions of matter, means, methods, or actions.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although multiple exemplary aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.