CN109479118A - Object detection method, object detection apparatus and electronic device - Google Patents

Object detection method, object detection apparatus and electronic device

Info

Publication number
CN109479118A
CN109479118A CN201680087601.8A
Authority
CN
China
Prior art keywords
video image
image frame
area
interest
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201680087601.8A
Other languages
Chinese (zh)
Inventor
伍健荣
刘晓青
白向晖
谭志明
东明浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Publication of CN109479118A
Pending legal-status Critical Current

Classifications

    • H  ELECTRICITY
    • H04  ELECTRIC COMMUNICATION TECHNIQUE
    • H04N  PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00  Television systems
    • H04N 7/18  Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide an object detection apparatus, an object detection method and an electronic device for detecting a target object from video image frames. The object detection apparatus includes: an extraction unit that extracts a region of interest from a video image frame based on motion information of the video image frame; and a detection unit that performs object detection in the video image frame according to the region of interest extracted by the extraction unit. According to the present application, both the accuracy and the speed of object detection are improved.

Description

Object detection method, object detection apparatus and electronic device
Technical field
This application relates to the field of information technology, and in particular to a video-image-based object detection method, an object detection apparatus and an electronic device.
Background art
With the development of information technology, image-based object detection techniques are being applied more and more widely. For example, in the field of traffic monitoring, object detection can be performed on video surveillance images so as to recognize objects such as specific vehicles, and in turn to realize functions such as object identification, tracking and control.
In existing object detection techniques based on video images, object detection can be performed over the whole image range of a video image frame. In this way detection blind spots are avoided; however, the range to be detected is large and the amount of data processing during detection is correspondingly large.
In the prior art, a region of interest (ROI) can also be preset in the video image frame, and object detection is then performed only in the region of interest of each video image frame. In this way the amount of data processing during detection is reduced and the detection speed is improved.
It should be noted that the above description of the technical background is merely intended to facilitate a convenient, clear and complete description of the technical solution of the present application, and to aid the understanding of those skilled in the art. It cannot be assumed that the above technical solutions are known to those skilled in the art merely because they are set out in the background section of the present application.
Summary of the application
The inventors of the present application have found that in the prior art the region of interest is preset and its position is the same in every video image frame, unless a new region of interest is set. In a real detection scene, however, the object to be detected usually moves, and once it moves outside the region of interest it is difficult to detect, resulting in missed detections.
The embodiments of the present application provide an object detection method, an object detection apparatus and an electronic device. The object detection apparatus can extract a region of interest based on the motion information of a video image frame and perform object detection according to the extracted region of interest, thereby improving the accuracy of object detection as well as the detection speed.
According to a first aspect of the embodiments of the present application, an object detection device is provided for detecting a target object from a video image frame, the object detection device including:
an extraction unit that extracts a region of interest from a video image frame based on motion information of the video image frame; and
a detection unit that performs object detection in the video image frame according to the region of interest extracted by the extraction unit.
According to a second aspect of the embodiments of the present application, an object detection method is provided for detecting a target object from a video image frame, the method including:
extracting a region of interest from a video image frame based on motion information of the video image frame; and
performing object detection in the video image frame according to the extracted region of interest.
According to a third aspect of the embodiments of the present application, an electronic device is provided that includes the object detection apparatus described in the first aspect of the above embodiments.
The beneficial effect of the embodiments of the present application is that, according to the embodiments of the present application, the accuracy of object detection can be improved and the detection speed increased.
Specific embodiments of the present application are disclosed in detail with reference to the following description and drawings, which indicate the ways in which the principles of the application may be employed. It should be understood that the embodiments of the present application are not thereby limited in scope. Within the spirit and scope of the appended claims, the embodiments of the present application include many changes, modifications and equivalents.
Features described and/or illustrated for one embodiment may be used in the same or a similar way in one or more other embodiments, combined with features of other embodiments, or substituted for features of other embodiments.
It should be emphasized that the term "comprises/comprising", when used herein, specifies the presence of features, integers, steps or components, but does not preclude the presence or addition of one or more other features, integers, steps or components.
Brief description of the drawings
The accompanying drawings are included to provide a further understanding of the embodiments of the present application; they constitute a part of the specification, are used to illustrate the embodiments of the present application and, together with the written description, serve to explain the principles of the application. It is evident that the drawings in the following description are only some examples of the present application, and a person of ordinary skill in the art could obtain other drawings from them without any inventive effort. In the drawings:
Fig. 1 is a schematic diagram of the object detection apparatus of Embodiment 1 of the present application;
Fig. 2 is a schematic diagram of the extraction unit of Embodiment 1 of the present application;
Fig. 3 is a schematic diagram of a video image frame of Embodiment 1 of the present application;
Fig. 4 is a schematic diagram of the binarized motion image corresponding to the video image frame of Fig. 3;
Fig. 5 is a schematic diagram after connected component segmentation and bounding rectangle generation have been applied to the binarized motion image of Fig. 4;
Fig. 6 is a schematic diagram of merging connected components in Embodiment 1 of the present application;
Fig. 7 is another schematic diagram of merging connected components in Embodiment 1 of the present application;
Fig. 8 is a schematic diagram of the detection unit of Embodiment 1 of the present application;
Fig. 9 is a schematic diagram of merging detection results in Embodiment 1 of the present application;
Fig. 10 is a workflow diagram of the detection unit of Embodiment 1 of the present application;
Fig. 11 is a flow diagram of the object detection method of Embodiment 2 of the present application;
Fig. 12 is a schematic diagram of the method of extracting a region of interest in Embodiment 2 of the present application;
Fig. 13 is a schematic diagram of the method of performing object detection in Embodiment 2 of the present application;
Fig. 14 is a schematic diagram of the composition of the electronic device of Embodiment 3 of the present application.
Detailed description of embodiments
The foregoing and other features of the present application will become apparent from the following description taken in conjunction with the accompanying drawings. The description and drawings specifically disclose particular embodiments of the application, showing some of the ways in which the principles of the application may be employed; it should be understood that the application is not limited to the described embodiments, but on the contrary includes all modifications, variations and equivalents falling within the scope of the appended claims. The various embodiments of the application are described below with reference to the drawings. These embodiments are merely exemplary and are not limitations on the application.
Embodiment 1
Embodiment 1 of the present application provides an object detection device for detecting a target object from a video image frame.
Fig. 1 is a schematic diagram of the object detection apparatus of Embodiment 1. As shown in Fig. 1, the detection apparatus 100 includes an extraction unit 101 and a detection unit 102.
The extraction unit 101 extracts a region of interest from a video image frame based on motion information of the video image frame; the detection unit 102 performs object detection in the video image frame according to the region of interest extracted by the extraction unit 101.
According to the present embodiment, the object detection apparatus can extract a region of interest based on the motion information of the video image frame and perform object detection according to the extracted region of interest. A region of interest can therefore be extracted more accurately for each individual video image frame, which improves the accuracy of object detection and increases the detection speed.
In the present embodiment the video image frame may, for example, be an image frame of a video captured by a surveillance camera; it may of course also come from another device, and the present embodiment places no limitation on this.
Fig. 2 is a schematic diagram of the extraction unit 101 of the present embodiment. As shown in Fig. 2, the extraction unit 101 includes a motion detection unit 201, an area division unit 202 and a generation unit 203.
In the present embodiment, the motion detection unit 201 detects the motion information in the video image frame; the area division unit 202 divides out, according to the motion information detected by the motion detection unit 201, the region occupied by each moving object in the video image frame; and the generation unit 203 generates, according to the region occupied by each moving object in the video image frame, at least one region of interest that covers the regions in which the moving objects are located in the video image frame.
In the present embodiment, the motion detection unit 201 can perform foreground detection on the video image frame to generate a binarized motion image of the video image frame. The motion information of the video image frame can be obtained from this binarized motion image; for example, the first pixels in the binarized motion image, which may be the white pixels, reflect the motion information of the video image frame.
Fig. 3 is a schematic diagram of a video image frame and Fig. 4 is a schematic diagram of the binarized motion image corresponding to the video image frame of Fig. 3; the white pixels in the binarized motion image 400 of Fig. 4 reflect the motion information of the video image frame 300.
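As an illustration of this foreground-detection step, the following is a minimal Python sketch that uses OpenCV's MOG2 background subtractor to produce a binarized motion image whose white pixels mark moving areas; the patent does not prescribe a particular foreground-detection algorithm, so the choice of MOG2, the function name binarized_motion_image and the parameter values are assumptions.

```python
import cv2

# Assumed foreground detector; any background-subtraction method could be used.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def binarized_motion_image(frame):
    """Return a binary motion image whose white (255) pixels mark moving areas."""
    fg_mask = subtractor.apply(frame)                  # raw foreground mask
    _, motion = cv2.threshold(fg_mask, 127, 255, cv2.THRESH_BINARY)
    # Optional morphological opening to suppress isolated noise pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.morphologyEx(motion, cv2.MORPH_OPEN, kernel)
```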
In the present embodiment, the area division unit 202 can perform connected component segmentation on the binarized motion image to obtain at least one connected component of pixels, each connected component corresponding to the region occupied by a moving object in the video image frame. For example, each connected component in the binarized motion image contains a number of first pixels; inside a connected component the first pixels are connected to one another, while first pixels belonging to different connected components are not connected, so the different connected components are isolated from one another.
In the present embodiment, the area division unit 202 can also generate, for each connected component in the binarized motion image, a bounding box that indicates the outline of that connected component; the bounding box may, for example, be a rectangle. Fig. 5 is a schematic diagram after connected component segmentation and bounding rectangle generation have been applied to the binarized motion image of Fig. 4. As shown in Fig. 5, each bounding rectangle 501 represents the outline of one connected component, and the area enclosed by each bounding rectangle 501 corresponds to the region occupied by a moving object in the video image frame 300.
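One possible implementation of the connected component segmentation and bounding-rectangle generation described above is sketched below, using OpenCV's connectedComponentsWithStats; the function name component_rects and the min_area noise filter are illustrative assumptions rather than part of the patent.

```python
import cv2

def component_rects(motion, min_area=20):
    """Return bounding rectangles (x, y, w, h) of the connected white regions
    of a binarized motion image, dropping tiny blobs below min_area pixels."""
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(motion, connectivity=8)
    rects = []
    for i in range(1, num):                            # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            rects.append((int(x), int(y), int(w), int(h)))
    return rects
```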
In the present embodiment, the area division unit 202 can also merge connected components whose mutual distance is less than or equal to a first threshold into a single new connected component.
In the present embodiment, the first threshold may be a value greater than 0. The distance between connected components may refer to the distance between the boundaries of the connected components, or to the distance between their geometric centres or centroids, and so on. When each connected component has a bounding box, the distance between connected components may also refer to the distance between the boundaries of the bounding boxes, or to the distance between the geometric centres or centroids of the bounding boxes; if the bounding boxes of two connected components overlap, the distance between the two connected components can be regarded as negative, i.e. smaller than the first threshold.
In the present embodiment, the area division unit 202 may also generate a bounding box for the new connected component formed by merging two or more connected components.
Fig. 6 is a schematic diagram of merging connected components. As shown in Fig. 6, in image 601 before merging, the bounding rectangles of the two connected components are 6011 and 6012 respectively, and bounding rectangles 6011 and 6012 partly overlap. In image 602 after merging, the two connected components have been merged into connected component 6020, whose bounding rectangle is 6021; bounding rectangle 6021 may be the bounding rectangle enclosing bounding rectangles 6011 and 6012.
Fig. 7 is another schematic diagram of merging connected components. As shown in Fig. 7, in image 701 before merging, the bounding rectangles of the four connected components are 7011, 7012, 7013 and 7014 respectively, and the distance between the boundary of each of these four bounding rectangles and that of its neighbour is less than the first threshold. In image 702, after the merging processing of the area division unit 202, the four connected components have been merged into connected component 7020, whose bounding rectangle is 7021; bounding rectangle 7021 may be the bounding rectangle enclosing bounding rectangles 7011, 7012, 7013 and 7014.
In addition, as shown in Fig. 7, bounding rectangle 7016 is relatively far from bounding rectangles 7011 to 7014, for example at a distance greater than the first threshold; the connected component corresponding to bounding rectangle 7016 is therefore not merged with the connected components corresponding to bounding rectangles 7011 to 7014.
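The merging behaviour illustrated by Figs. 6 and 7 can be sketched as follows; the rectangle-gap metric and the iterative pairwise merging are one plausible reading of "mutual distance is less than or equal to the first threshold" (overlapping boxes yield a gap of 0 and are therefore always merged), not necessarily the applicant's exact rule.

```python
def rect_gap(a, b):
    """Gap between two axis-aligned boxes (x, y, w, h); 0 if they touch or overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    dx = max(bx - (ax + aw), ax - (bx + bw), 0)
    dy = max(by - (ay + ah), ay - (by + bh), 0)
    return max(dx, dy)

def merge_close_rects(rects, first_threshold=10):
    """Repeatedly merge boxes whose mutual gap is <= first_threshold,
    replacing each merged pair by its enclosing bounding rectangle."""
    rects = list(rects)
    merged = True
    while merged:
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if rect_gap(rects[i], rects[j]) <= first_threshold:
                    ax, ay, aw, ah = rects[i]
                    bx, by, bw, bh = rects[j]
                    x1, y1 = min(ax, bx), min(ay, by)
                    x2, y2 = max(ax + aw, bx + bw), max(ay + ah, by + bh)
                    rects[i] = (x1, y1, x2 - x1, y2 - y1)   # new enclosing box
                    del rects[j]
                    merged = True
                    break
            if merged:
                break
    return rects
```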
In the present embodiment, the generation unit 203 can generate at least one region of interest according to the distances between the regions occupied by the moving objects in the video image frame, so that regions that are close to one another are covered by the same region of interest.
In the present embodiment, the distances between the regions occupied by the moving objects in the video image frame correspond to the distances between the connected components in the binarized motion image. The generation unit 203 can therefore generate the regions of interest according to the distances between the connected components in the binarized motion image; for example, the generation unit 203 can have connected components whose mutual distance is less than or equal to a second threshold covered by the same region of interest.
As shown in Fig. 7, the distance between the connected component corresponding to bounding rectangle 7016 and connected component 7020 is less than or equal to the second threshold, so the connected component corresponding to bounding rectangle 7016 and connected component 7020 are covered by the same region of interest 703, whose boundary 7031 is marked with a rectangular frame. The present embodiment is of course not limited to this, and the region of interest may also be marked in other ways; for example, the boundary 7031 may be another polygonal frame.
In Fig. 7, the boundary of the region of interest 703 may be larger than the bounding boxes of the connected components it covers; for example, the boundary 7031 of the region of interest 703 may be larger than the bounding rectangle enclosing bounding rectangle 7016 and bounding rectangle 7021, e.g. 10% larger.
In the present embodiment, the generation unit 203 can take the region of the video image frame corresponding to the region of interest generated in the binarized motion image as the region of interest of the video image frame; the extraction unit 101 thereby extracts the region of interest from the video image frame.
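The ROI-generation behaviour described above can be sketched as below: connected components within the second threshold of one another are grouped (here by reusing the illustrative merge_close_rects helper from the previous sketch), each group is enclosed in one rectangle, and that rectangle is enlarged by a margin of roughly 10% and clipped to the frame. The margin value, the threshold value and the reuse of the merging helper for grouping are assumptions.

```python
def generate_rois(rects, frame_shape, second_threshold=50, margin=0.10):
    """Generate regions of interest (x, y, w, h) that cover groups of nearby boxes."""
    groups = merge_close_rects(rects, first_threshold=second_threshold)  # group by second threshold
    h, w = frame_shape[:2]
    rois = []
    for x, y, rw, rh in groups:
        dx, dy = int(rw * margin), int(rh * margin)    # enlarge boundary, e.g. by ~10%
        x1, y1 = max(0, x - dx), max(0, y - dy)
        x2, y2 = min(w, x + rw + dx), min(h, y + rh + dy)
        rois.append((x1, y1, x2 - x1, y2 - y1))
    return rois
```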
The frame 301 in Fig. 3 shows the boundary of the region of interest extracted from the video image frame 300 by the extraction unit 101 according to the present application.
In the present embodiment, the detection unit 102 can perform object detection in the video image frame based on the region of interest extracted by the extraction unit 101.
Fig. 8 is a schematic diagram of the detection unit 102. As shown in Fig. 8, the detection unit 102 may include a judging unit 801 and a subject detecting unit 802.
In the present embodiment, the judging unit 801 judges whether the number of regions of interest in the video image frame is less than or equal to a third threshold and whether the area of the regions of interest is less than or equal to a fourth threshold; the subject detecting unit 802 performs object detection, according to the judgement result of the judging unit 801, either in the regions of interest of the video image frame or over the whole image range of the video image frame.
In the present embodiment, if the judging unit 801 judges that the number of regions of interest in the video image frame is less than or equal to the third threshold and that the total area of the regions of interest in the video image frame is less than or equal to the fourth threshold, the subject detecting unit 802 performs object detection in each region of interest of the video image frame, which enables fast object detection.
In addition, if the judging unit 801 judges that the number of regions of interest in the video image frame is 0, the subject detecting unit 802 performs no object detection on that video image frame.
In the present embodiment, if the judging unit 801 judges that the number of regions of interest in the video image frame is greater than the third threshold, or that the total area of the regions of interest in the video image frame is greater than the fourth threshold, this indicates that a large part of the video image frame is in motion, and the subject detecting unit 802 performs object detection over the whole image range of the video image frame, which prevents missed detections.
In the present embodiment, the specific method by which the subject detecting unit 802 performs object detection may follow the prior art and is not described here.
Furthermore, in the present embodiment, specific video image frames in the video may be treated as key frames and the other video image frames as normal frames; for example, video image frames at a predetermined time interval or a predetermined frame interval may be used as key frames, and key frames may also be set in other ways. The judging unit 801 may judge whether the video image frame is a normal frame. For a normal frame, whether object detection is performed in the regions of interest of the frame or over the whole image range of the frame is further determined according to the judgement result of the judging unit 801; for a key frame, no further judgement by the judging unit 801 is needed and object detection is performed directly over the whole image range of the key frame. By performing whole-image object detection on key frames, missed detections can be prevented.
In the present embodiment, the detection unit 102 may also have a combining unit 803. When the subject detecting unit 802 performs object detection in the regions of interest of the current video image frame, the combining unit 803 can merge the detection result obtained in the regions of interest of the current video image frame with the detection result of a video image frame preceding the current video image frame. The merged result may include, for example, the detection result from the regions of interest of the current video image frame together with the detection result of the preceding video image frame for the parts outside the regions of interest of the current video image frame.
Fig. 9 is a schematic diagram of merging detection results. 901 is a video image frame preceding the current video image frame 902, and 9011 and 9012 are target objects detected in video image frame 901. The region of interest of the current video image frame 902 is 9021, and target object 9022 is detected in region of interest 9021. The detection result of the current video image frame 902 is merged with the detection result of the preceding video image frame 901 to obtain the combined detection result 903, which includes the target object 9022 detected in region of interest 9021 of the current video image frame 902 and, outside region of interest 9021 of the current video image frame 902, the target object 9012 detected in the preceding video image frame 901.
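A minimal sketch of the combining unit's behaviour is given below: detections from the current frame's regions of interest are kept, and earlier detections are retained only where they fall outside those regions of interest. Deciding membership by the centre of a detection box, and the helper names, are assumptions.

```python
def center_inside(det, roi):
    """True if the centre of detection box det (x, y, w, h) lies inside roi."""
    dx, dy, dw, dh = det
    rx, ry, rw, rh = roi
    cx, cy = dx + dw / 2.0, dy + dh / 2.0
    return rx <= cx <= rx + rw and ry <= cy <= ry + rh

def merge_results(current_dets, previous_dets, rois):
    """Combine current ROI detections with earlier detections outside the ROIs."""
    kept_previous = [d for d in previous_dets
                     if not any(center_inside(d, r) for r in rois)]
    return list(current_dets) + kept_previous
```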
The workflow of the detection unit 102 is described below in conjunction with Fig. 10 (an illustrative sketch of this decision flow follows the steps).
Step 1001: the judging unit 801 judges whether the current video image frame is a normal frame; if so, proceed to step 1002, otherwise proceed to step 1005.
Step 1002: the judging unit 801 judges whether the number of regions of interest in the current video image frame is less than or equal to the third threshold; if so, proceed to step 1003, otherwise proceed to step 1005.
Step 1003: the judging unit 801 judges whether the total area of the regions of interest in the current video image frame is less than or equal to the fourth threshold; if so, proceed to step 1004, otherwise proceed to step 1005.
Step 1004: the subject detecting unit 802 performs object detection in the regions of interest of the video image frame.
Step 1005: the subject detecting unit 802 performs object detection over the whole image range of the video image frame.
Step 1006: the combining unit 803 merges the detection result of the regions of interest of the current video image frame with the detection result of the preceding video image frame.
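The decision flow of steps 1001 to 1005 can be sketched as follows, assuming a detector callable that takes a frame and a rectangle (x, y, w, h) and returns a list of detection boxes; the concrete threshold values, and expressing the fourth threshold as a fraction of the frame area, are assumptions for illustration only.

```python
def detect_frame(frame, rois, is_key_frame, detector,
                 third_threshold=5, fourth_threshold_ratio=0.5):
    """Run the detector per ROI or over the whole frame, following Fig. 10.
    Returns (detections, used_full_frame)."""
    h, w = frame.shape[:2]
    roi_area = sum(rw * rh for _, _, rw, rh in rois)
    use_full = (is_key_frame                                   # step 1001: key frames use the whole image
                or len(rois) > third_threshold                  # step 1002: too many ROIs
                or roi_area > fourth_threshold_ratio * h * w)   # step 1003: ROIs cover too large an area
    if use_full:
        return detector(frame, (0, 0, w, h)), True              # step 1005: whole-image detection
    results = []
    for x, y, rw, rh in rois:                                   # step 1004: per-ROI detection
        results.extend(detector(frame, (x, y, rw, rh)))
    return results, False                                       # empty rois: no detection this frame
```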
According to the present embodiment, the object detection apparatus can extract a region of interest based on the motion information of the video image frame and perform object detection according to the extracted region of interest. A region of interest can therefore be extracted more accurately for each individual video image frame, which improves the accuracy of object detection and increases the detection speed.
Embodiment 2
Embodiment 2 of the present application also provides an object detection method, corresponding to the object detection apparatus of Embodiment 1, for detecting a target object from a video image frame.
Fig. 11 is a flow diagram of the object detection method of Embodiment 2. As shown in Fig. 11, the method may include:
Step 1101: extracting a region of interest from a video image frame based on motion information of the video image frame; and
Step 1102: performing object detection in the video image frame according to the extracted region of interest.
Fig. 12 is a schematic diagram of the method of extracting a region of interest in Embodiment 2. As shown in Fig. 12, the method includes:
Step 1201: detecting the motion information in the video image frame;
Step 1202: dividing out, according to the detected motion information, the region occupied by each moving object in the video image frame; and
Step 1203: generating, according to the regions occupied by the moving objects in the video image frame, at least one region of interest that covers the regions in which the moving objects are located in the video image frame.
In step 1201 of the present embodiment, a binarized motion image of the video image frame can be generated based on foreground detection, so as to obtain the motion information of the video image frame.
In step 1202 of the present embodiment, connected component segmentation can be performed on the binarized motion image to obtain at least one connected component of pixels, the at least one connected component corresponding to the regions occupied by the moving objects in the video image frame.
In step 1202 of the present embodiment, a bounding box can also be generated for each connected component.
In step 1202 of the present embodiment, connected components whose mutual distance is less than or equal to a first threshold can also be merged into a single new connected component.
In step 1203 of the present embodiment, the at least one region of interest can be generated according to the distances between the regions.
Fig. 13 is a schematic diagram of the method of Embodiment 2 for performing object detection in the video image frame according to the extracted region of interest. As shown in Fig. 13, the method includes:
Step 1301: judging whether the number of regions of interest in the video image frame is less than or equal to a third threshold and whether the area of the regions of interest is less than or equal to a fourth threshold; and
Step 1302: performing object detection, according to the result of the judgement, either in the regions of interest of the video image frame or over the whole image range of the video image frame.
As shown in Fig. 13, the method further includes:
Step 1303: when object detection is performed in the regions of interest of the current video image frame, merging the detection result obtained in the regions of interest of the current video image frame with the detection result of a video image frame preceding the current video image frame.
For a detailed description of each of the above steps, reference may be made to the description of the corresponding units in Embodiment 1, which is not repeated here. (An illustrative end-to-end sketch combining these steps is given below.)
According to the present embodiment, the object detection method can extract a region of interest based on the motion information of the video image frame and perform object detection according to the extracted region of interest. A region of interest can therefore be extracted more accurately for each individual video image frame, which improves the accuracy of object detection and increases the detection speed.
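Purely as an illustration, the following end-to-end sketch chains the illustrative helpers sketched in Embodiment 1 (binarized_motion_image, component_rects, merge_close_rects, generate_rois, detect_frame, merge_results) into the method of this embodiment; the key-frame interval and the detector callable are assumptions.

```python
import cv2

def process_video(path, detector, key_frame_interval=25):
    """Run ROI-based object detection over a video file, frame by frame."""
    cap = cv2.VideoCapture(path)
    previous_dets, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        motion = binarized_motion_image(frame)               # step 1201: motion information
        rects = merge_close_rects(component_rects(motion))   # step 1202: moving-object regions
        rois = generate_rois(rects, frame.shape)              # step 1203: regions of interest
        is_key = frame_idx % key_frame_interval == 0
        dets, used_full = detect_frame(frame, rois, is_key, detector)   # steps 1301-1302
        previous_dets = dets if used_full else merge_results(dets, previous_dets, rois)  # step 1303
        frame_idx += 1
    cap.release()
    return previous_dets
```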
Embodiment 3
Embodiment 3 of the present application provides an electronic device that includes the object detection apparatus described in Embodiment 1.
Fig. 14 is a schematic diagram of the composition of the electronic device of Embodiment 3 of the present application. As shown in Fig. 14, the electronic device 1400 may include a central processing unit (CPU) 1401 and a memory 1402, the memory 1402 being coupled to the central processing unit 1401. The memory 1402 can store various data as well as a program for performing object detection, which is executed under the control of the central processing unit 1401.
In one embodiment, the functions of the object detection apparatus may be integrated into the central processing unit 1401.
The central processing unit 1401 may be configured to:
extract a region of interest from a video image frame based on motion information of the video image frame; and
perform object detection in the video image frame according to the extracted region of interest.
The central processing unit 1401 may be further configured to:
detect the motion information in the video image frame;
divide out, according to the detected motion information, the region occupied by each moving object in the video image frame; and
generate, according to the regions occupied by the moving objects in the video image frame, at least one region of interest that covers the regions in which the moving objects are located in the video image frame.
The central processing unit 1401 may be further configured to:
generate a binarized motion image of the video image frame based on foreground detection, so as to obtain the motion information of the video image frame.
The central processing unit 1401 may be further configured to:
perform connected component segmentation on the binarized motion image to obtain at least one connected component of pixels, the at least one connected component corresponding to the regions occupied by the moving objects in the video image frame.
The central processing unit 1401 may be further configured to:
generate a bounding box for each connected component.
The central processing unit 1401 may be further configured to:
merge connected components whose mutual distance is less than or equal to a first threshold into a single new connected component.
The central processing unit 1401 may be further configured to:
generate the at least one region of interest according to the distances between the regions.
The central processing unit 1401 may be further configured to:
judge whether the number of regions of interest in the video image frame is less than or equal to a third threshold and whether the area of the regions of interest is less than or equal to a fourth threshold; and
perform object detection, according to the result of the judgement, either in the regions of interest of the video image frame or over the whole image range of the video image frame.
The central processing unit 1401 may be further configured to:
when object detection is performed in the regions of interest of the current video image frame, merge the detection result obtained in the regions of interest of the current video image frame with the detection result of a video image frame preceding the current video image frame.
In addition, as shown in Fig. 14, the electronic device 1400 may also include an input/output unit 1403 and a display unit 1404, among other components; the functions of these components are similar to those in the prior art and are not described here. It is worth noting that the electronic device 1400 does not necessarily have to include all of the components shown in Fig. 14, and it may also include components not shown in Fig. 14, for which reference may be made to the prior art.
The embodiments of the present application also provide a computer-readable program which, when executed in an object detection apparatus or electronic device, causes the object detection apparatus or electronic device to execute the object detection method described in Embodiment 2.
The embodiments of the present application also provide a storage medium storing a computer-readable program, the computer-readable program causing an object detection apparatus or electronic device to execute the object detection method described in Embodiment 2.
The object detection apparatus described in conjunction with the embodiments of the present invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. For example, one or more of the functional block diagrams shown in Figs. 1, 2 and 8, and/or one or more combinations of the functional block diagrams, may correspond either to software modules of a computer program flow or to hardware modules. These software modules may correspond respectively to the steps shown in Embodiment 2. The hardware modules may, for example, be implemented by solidifying the software modules using a field-programmable gate array (FPGA).
A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, a register, a hard disk, a removable disk, a CD-ROM or any other form of storage medium known in the art. A storage medium may be coupled to a processor so that the processor can read information from, and write information to, the storage medium; or the storage medium may be an integral part of the processor. The processor and the storage medium may reside in an ASIC. The software module may be stored in a memory of a mobile terminal or in a memory card insertable into a mobile terminal. For example, if a device (such as a mobile terminal) uses a large-capacity MEGA-SIM card or a large-capacity flash memory device, the software module may be stored in that MEGA-SIM card or large-capacity flash memory device.
One or more of the functional block diagrams described with respect to Figs. 1, 2 and 8, and/or one or more combinations of the functional block diagrams, may be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof, for performing the functions described in the present application. One or more of the functional block diagrams described with respect to Figs. 1-3, and/or one or more combinations of the functional block diagrams, may also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.
The application has been described above in conjunction with specific embodiments, but it will be clear to those skilled in the art that these descriptions are exemplary and do not limit the scope of protection of the application. Those skilled in the art may make various variations and modifications to the application in accordance with its principles, and such variations and modifications also fall within the scope of the application.

Claims (19)

  1. An object detection device for detecting a target object from a video image frame, the object detection device comprising:
    an extraction unit that extracts a region of interest from a video image frame based on motion information of the video image frame; and
    a detection unit that performs object detection in the video image frame according to the region of interest extracted by the extraction unit.
  2. The object detection device according to claim 1, wherein the extraction unit comprises:
    a motion detection unit that detects the motion information in the video image frame;
    an area division unit that divides out, according to the motion information detected by the motion detection unit, the region occupied by each moving object in the video image frame; and
    a generation unit that generates, according to the region occupied by each moving object in the video image frame, at least one region of interest, the at least one region of interest covering the regions in which the moving objects are located in the video image frame.
  3. The object detection device according to claim 2, wherein
    the motion detection unit generates a binarized motion image of the video image frame based on foreground detection, so as to obtain the motion information of the video image frame.
  4. The object detection device according to claim 3, wherein
    the area division unit performs connected component segmentation on the binarized motion image to obtain at least one connected component of pixels, the at least one connected component corresponding to the region occupied by each moving object in the video image frame.
  5. The object detection device according to claim 4, wherein
    the area division unit generates a bounding box for each connected component.
  6. The object detection device according to claim 4, wherein
    the area division unit merges connected components whose mutual distance is less than or equal to a first threshold into a single new connected component.
  7. The object detection device according to claim 2, wherein
    the generation unit generates the at least one region of interest according to the distances between the regions.
  8. The object detection device according to claim 1, wherein the detection unit comprises:
    a judging unit that judges whether the number of regions of interest in the video image frame is less than or equal to a third threshold and whether the area of the regions of interest is less than or equal to a fourth threshold; and
    a subject detecting unit that performs object detection, according to the judgement result of the judging unit, either in the regions of interest of the video image frame or over the whole image range of the video image frame.
  9. The object detection device according to claim 8, wherein the detection unit further comprises:
    a combining unit that, when the subject detecting unit performs object detection in the regions of interest of the current video image frame, merges the detection result obtained in the regions of interest of the current video image frame with the detection result of a video image frame preceding the current video image frame.
  10. An electronic device comprising the object detection device according to any one of claims 1 to 9.
  11. An object detection method for detecting a target object from a video image frame, the method comprising:
    extracting a region of interest from a video image frame based on motion information of the video image frame; and
    performing object detection in the video image frame according to the extracted region of interest.
  12. The object detection method according to claim 11, wherein extracting a region of interest from the video image frame comprises:
    detecting the motion information in the video image frame;
    dividing out, according to the detected motion information, the region occupied by each moving object in the video image frame; and
    generating, according to the region occupied by each moving object in the video image frame, at least one region of interest, the at least one region of interest covering the regions in which the moving objects are located in the video image frame.
  13. The object detection method according to claim 12, wherein detecting the motion information in the video image frame comprises:
    generating a binarized motion image of the video image frame based on foreground detection, so as to obtain the motion information of the video image frame.
  14. The object detection method according to claim 13, wherein dividing out, according to the detected motion information, the region occupied by each moving object in the video image frame comprises:
    performing connected component segmentation on the binarized motion image to obtain at least one connected component of pixels, the at least one connected component corresponding to the region occupied by each moving object in the video image frame.
  15. The object detection method according to claim 14, wherein dividing out, according to the detected motion information, the region occupied by each moving object in the video image frame further comprises:
    generating a bounding box for each connected component.
  16. The object detection method according to claim 14, wherein dividing out, according to the detected motion information, the region occupied by each moving object in the video image frame further comprises:
    merging connected components whose mutual distance is less than or equal to a first threshold into a single new connected component.
  17. The object detection method according to claim 12, wherein extracting a region of interest from the video image frame comprises:
    generating the at least one region of interest according to the distances between the regions.
  18. The object detection method according to claim 11, wherein performing object detection in the video image frame according to the extracted region of interest comprises:
    judging whether the number of regions of interest in the video image frame is less than or equal to a third threshold and whether the area of the regions of interest is less than or equal to a fourth threshold; and
    performing object detection, according to the result of the judgement, either in the regions of interest of the video image frame or over the whole image range of the video image frame.
  19. The object detection method according to claim 18, wherein performing object detection in the video image frame according to the extracted region of interest further comprises:
    when object detection is performed in the regions of interest of the current video image frame, merging the detection result obtained in the regions of interest of the current video image frame with the detection result of a video image frame preceding the current video image frame.
CN201680087601.8A 2016-09-30 2016-09-30 Object detection method, object detection apparatus and electronic device Pending CN109479118A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/101204 WO2018058573A1 (en) 2016-09-30 2016-09-30 Object detection method, object detection apparatus and electronic device

Publications (1)

Publication Number Publication Date
CN109479118A true CN109479118A (en) 2019-03-15

Family

ID=61762403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680087601.8A Pending CN109479118A (en) 2016-09-30 2016-09-30 Method for checking object, object test equipment and electronic equipment

Country Status (2)

Country Link
CN (1) CN109479118A (en)
WO (1) WO2018058573A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021088689A1 (en) * 2019-11-06 2021-05-14 Ningbo Geely Automobile Research & Development Co., Ltd. Vehicle object detection

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584266B (en) * 2018-11-15 2023-06-09 腾讯科技(深圳)有限公司 Target detection method and device
CN110738101B (en) * 2019-09-04 2023-07-25 平安科技(深圳)有限公司 Behavior recognition method, behavior recognition device and computer-readable storage medium
CN111191730B (en) * 2020-01-02 2023-05-12 中国航空工业集团公司西安航空计算技术研究所 Method and system for detecting oversized image target oriented to embedded deep learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101198033A (en) * 2007-12-21 2008-06-11 北京中星微电子有限公司 Locating method and device for foreground image in binary image
CN101325690A (en) * 2007-06-12 2008-12-17 上海正电科技发展有限公司 Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow
CN101799968A (en) * 2010-01-13 2010-08-11 任芳 Detection method and device for oil well intrusion based on video image intelligent analysis
CN103020608A (en) * 2012-12-28 2013-04-03 南京荣飞科技有限公司 Method for identifying prisoner wears in prison video surveillance image
CN104573697A (en) * 2014-12-31 2015-04-29 西安丰树电子科技发展有限公司 Construction hoist lift car people counting method based on multi-information fusion
CN105957110A (en) * 2016-06-29 2016-09-21 上海小蚁科技有限公司 Equipment and method used for detecting object

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104167004A (en) * 2013-05-16 2014-11-26 上海分维智能科技有限公司 Rapid moving vehicle detection method for embedded DSP platform
US9405974B2 (en) * 2013-11-13 2016-08-02 Xerox Corporation System and method for using apparent size and orientation of an object to improve video-based tracking in regularized environments
CN103971381A (en) * 2014-05-16 2014-08-06 江苏新瑞峰信息科技有限公司 Multi-target tracking system and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101325690A (en) * 2007-06-12 2008-12-17 上海正电科技发展有限公司 Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow
CN101198033A (en) * 2007-12-21 2008-06-11 北京中星微电子有限公司 Locating method and device for foreground image in binary image
CN101799968A (en) * 2010-01-13 2010-08-11 任芳 Detection method and device for oil well intrusion based on video image intelligent analysis
CN103020608A (en) * 2012-12-28 2013-04-03 南京荣飞科技有限公司 Method for identifying prisoner wears in prison video surveillance image
CN104573697A (en) * 2014-12-31 2015-04-29 西安丰树电子科技发展有限公司 Construction hoist lift car people counting method based on multi-information fusion
CN105957110A (en) * 2016-06-29 2016-09-21 上海小蚁科技有限公司 Equipment and method used for detecting object

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
赵池航,连捷,党倩: "《交通信息感知理论与方法 汉、英》", 30 September 2014, 东南大学出版社 *
邓继忠: "《数字图像处理技术》", 30 September 2005, 广东科技出版社 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021088689A1 (en) * 2019-11-06 2021-05-14 Ningbo Geely Automobile Research & Development Co., Ltd. Vehicle object detection

Also Published As

Publication number Publication date
WO2018058573A1 (en) 2018-04-05

Similar Documents

Publication Publication Date Title
US10803357B2 (en) Computer-readable recording medium, training method, and object detection device
KR101669713B1 (en) Object detection
US9940509B2 (en) Object detection method and object detection apparatus
JP6496987B2 (en) Target detection method and target detection apparatus
US8902053B2 (en) Method and system for lane departure warning
US20180018528A1 (en) Detecting method and device of obstacles based on disparity map and automobile driving assistance system
US9846823B2 (en) Traffic lane boundary line extraction apparatus and method of extracting traffic lane boundary line
EP3379509A1 (en) Apparatus, method, and image processing device for smoke detection
CN109727275B (en) Object detection method, device, system and computer readable storage medium
US20170230652A1 (en) Determination method and determination apparatus
CN109479118A (en) Method for checking object, object test equipment and electronic equipment
CN111382704A (en) Vehicle line-pressing violation judgment method and device based on deep learning and storage medium
US11017552B2 (en) Measurement method and apparatus
WO2018058530A1 (en) Target detection method and device, and image processing apparatus
CN104615972B (en) Intelligent identification method and device for pointer instrument
US11210773B2 (en) Information processing apparatus, information processing method, and storage medium for defect inspection and detection
US20160307050A1 (en) Method and system for ground truth determination in lane departure warning
US9508000B2 (en) Object recognition apparatus
US8494284B2 (en) Methods and apparatuses for facilitating detection of text within an image
TW201501080A (en) Method and system for object detection and tracking
KR20160037480A (en) Method for establishing region of interest in intelligent video analytics and video analysis apparatus using the same
US9639763B2 (en) Image target detecting apparatus and method
TW201536609A (en) Obstacle detection device
US20170006212A1 (en) Device, system and method for multi-point focus
CN107255470B (en) Obstacle detection device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190315)