CN104604219B - Image processing apparatus and image processing method - Google Patents


Info

Publication number
CN104604219B
CN104604219B (application CN201480002199.XA)
Authority
CN
China
Prior art keywords
image
attribute
involved
output
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201480002199.XA
Other languages
Chinese (zh)
Other versions
CN104604219A (en)
Inventor
张海虹
劳世红
吉野广太郎
仓田刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Omron Corp
Original Assignee
Omron Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Omron Corp filed Critical Omron Corp
Publication of CN104604219A
Application granted
Publication of CN104604219B


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 — Television systems
    • H04N7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 — Closed-circuit television [CCTV] systems for receiving images from a single remote source
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a technology by which, as long as a distinguishing appearance feature of an object such as a person or a vehicle is known, the matching object captured in a video image can be easily confirmed. An image processing processor (4) processes the video image input to an image input unit (3) and, for each captured object, determines whether the object satisfies each of a plurality of predetermined attributes. Further, when the image processing processor (4) receives a designation of an attribute type, it generates an output image in which the objects satisfying the designated attribute type are distinguished from other objects, and outputs the image from an output unit (7).

Description

Image processing apparatus and image processing method
Technical field
The present invention relates to a technique for confirming, among the objects (people, vehicles, etc.) captured in a video image, an object that satisfies a designated attribute.
Background Art
Surveillance cameras have conventionally been installed in various places such as airports, stations, shopping centers, and street corners. A scheme has also been proposed in which a person on a wanted circular or the like is treated as a detection target person, the faces of the people captured in the images shot by the surveillance cameras are compared against the face of the detection target person by face recognition, and shot images in which the detection target person appears are thereby detected (see Patent Document 1, etc.).
Further, when a captured detection target person is detected, this device notifies a related organization or the like of the shooting place and date, the shot image of the detected person, and so on.
At the related organization, the police or the like search for the detection target person based on the notified detection notice.
Existing technical literature
Patent document
Patent document 1: Japanese Unexamined Patent Publication 2004-62560 bulletin
Summary of the invention
Problems to be solved by the invention
However, incidents or accidents sometimes occur outside the shooting area of any surveillance camera. In such a case, because no surveillance camera captured the scene itself, the shot images of the multiple surveillance cameras installed around the scene must be checked in order to find the person who was a party to the incident.
The device proposed in Patent Document 1 and the like is premised on the face of the detection target person being fully known. It therefore cannot simply search the shot images of the surveillance cameras for a detection target person whose face is unknown (the party to the incident). Consequently, an investigator such as a police officer hears the appearance features of the party (wearing a mask, wearing sunglasses, etc.) from a person who witnessed the incident (a witness), visually checks the shot images of the surveillance cameras, and looks for a captured person matching those features. Thus, even when a distinguishing appearance feature is known, the work of finding a person whose face is unknown by checking the shot images of surveillance cameras takes considerable time and labor.
An object of the present invention is to provide a technology by which, as long as a distinguishing appearance feature of an object such as a person or a vehicle is known, the matching object captured in a video image can be easily confirmed.
Means for Solving the Problems
To achieve the above object, the image processing apparatus of the present invention is configured as follows.
An attribute determination unit processes the video image input to an image input unit and, for each captured object, determines whether the object satisfies each of a plurality of predetermined attributes. The video image input to the image input unit may be a real-time image of a shooting area shot by an imaging device such as a surveillance camera, or it may be an image recorded on a medium.
The type of object for which the attribute determination unit determines attributes is, for example, a moving body such as a person or a vehicle.
The plurality of attributes to be determined by the attribute determination unit can be decided according to the type of the target object. For example, when the target object is a person, appearance-related attributes such as whether the person is wearing a mask or wearing sunglasses can be determined, as well as state-related (moving-state) attributes such as whether the movement speed is high or whether the person is moving in a state in which the face cannot be captured (that is, moving so as to hide the face). When the target object is a vehicle, appearance-related attributes can be determined, such as whether the number plate is hidden so that it cannot be captured, whether the registration number written on the number plate is hidden (without hiding the number plate itself), or whether the vehicle is concealed so that the driver's face cannot be seen from the front, as well as state-related (moving-state) attributes such as whether the movement speed is high, whether the vehicle is driving against traffic, whether it is weaving, or whether it is traveling without lights.
An output image generation unit generates an output image in which the objects that the attribute determination unit has determined to satisfy the attribute type accepted by an attribute designation accepting unit are distinguished from other objects. For example, it generates, as the output image, a thumbnail image that displays in one view the images of the objects determined to satisfy the accepted attribute type, or a highlighted image in which the images of those objects are emphasized relative to the other images. An output unit then outputs the output image generated by the output image generation unit.
An investigator such as a police officer can thus easily confirm the people captured in video images shot by surveillance cameras and the like. For example, even a person whose face is unknown can be narrowed down by known appearance-related or state-related attributes. Objects of other types, such as vehicles captured in the video images, can be confirmed in the same way and narrowed down by the same kinds of attributes.
Therefore, as long as a distinguishing appearance feature of an object such as a person or a vehicle is known, the time and labor required to confirm the matching object captured in a video image can be reduced.
Further, the image processing apparatus of the present invention may further include an attribute determination result storage unit that stores, for each object captured in the video image input to the image input unit, the image of the object in association with the attribute determination unit's determination result for each of the plurality of predetermined attributes.
With this configuration, the processing in the attribute determination unit can be performed in advance for the objects captured in the video image input to the image input unit. As a result, when the attribute designation accepting unit accepts a designation of an attribute type, generation of the output image in the output image generation unit can begin immediately, improving processing efficiency.
Further, the image processing apparatus of the present invention may further include a reproduction unit that, when any object included in the output image output by the output unit is selected and designated, reproduces the video image input to the image input unit from the position at which that object was captured.
With this configuration, the video image in which an object narrowed down by attributes was captured can be easily confirmed.
Invention effect
According to the present invention, as long as a distinguishing appearance feature of an object such as a person or a vehicle is known, the matching object captured in a video image can be easily confirmed.
Brief Description of the Drawings
Fig. 1 is a block diagram showing the structure of the main parts of the image processing apparatus.
Fig. 2 is a diagram showing an attribute determination result database.
Fig. 3 is a flowchart showing attribute determination result database creation processing in the image processing apparatus.
Fig. 4 is a flowchart showing attribute determination processing.
Fig. 5 is a flowchart showing determination processing for attributes other than movement speed.
Fig. 6 is a flowchart showing output image generation processing.
Fig. 7 is a diagram showing an example display screen of thumbnail images serving as the output image.
Fig. 8 is a flowchart showing reproduction processing.
Specific embodiment
Hereinafter, an image processing apparatus according to an embodiment of the present invention will be described.
Fig. 1 is a block diagram showing the structure of the main parts of this image processing apparatus. This image processing apparatus 1 includes: a control unit 2, an image input unit 3, an image processing processor 4, an attribute determination result database 5, an operation unit 6, and an output unit 7. For each person captured in the video image, this image processing apparatus 1 determines whether the person satisfies each of a plurality of predetermined attributes, and generates and outputs an output image using the determination results.
The control unit 2 controls the operation of each part of the image processing apparatus 1.
A video image (moving image) is input to the image input unit 3. The video image may be a real-time image of a shooting area captured by an imaging device such as a surveillance camera (not shown), or it may be a video image recorded on a hard disk or other medium (moving image data compressed using MPEG-2 or the like).
The image processing processor 4 has a structure corresponding to the attribute determination unit of the present invention. It processes the video image input to the image input unit 3 and determines, for each captured person, whether the person satisfies each of a plurality of predetermined attributes. In this example, two appearance-related attributes are specified: whether the person is wearing a mask, and whether the person is wearing sunglasses. In addition, two state-related attributes are specified: whether the movement speed is high, and whether the person is moving in a state in which the face cannot be captured.
As described in detail later, the image processing processor 4 cuts out the face of each person captured in the input video image and determines whether the appearance-related attributes are satisfied. The image processing processor 4 also tracks each person captured in the input video image and determines whether the state-related (moving-state) attributes are satisfied.
The image processing processor 4 also has a structure corresponding to the output image generation unit of the present invention, and generates an output image based on the determination results for the above attributes.
The image processing processor 4 corresponds to a computer that executes the image processing method of the present invention (executes an image processing program).
The attribute determination result database 5 is a database in which, for each person captured in the input video image, the following are registered in association: an ID identifying the person, a face image of the person (a face image cut out from a frame image in which the person appears), the determination results for the plurality of predetermined attributes, and a reproduction start position in the input video image (time data indicating the position at which the person appears in the video image). Fig. 2 shows the structure of this attribute determination result database. In Fig. 2, a circle (〇) indicates that the corresponding attribute is satisfied, and a cross (×) indicates that it is not. For a person whose face is hidden (the person with ID 0006 in Fig. 2), no face image need be registered; in this example, however, an image from which the person's clothing can be confirmed is registered instead. The attribute determination result database 5 corresponds to the attribute determination result storage unit of the present invention.
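A minimal sketch of one row of the attribute determination result database of Fig. 2, assuming a simple in-memory record; all field and function names here are illustrative and not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttributeRecord:
    """One row of the attribute determination result database (Fig. 2).
    Field names are illustrative stand-ins for the registered items."""
    person_id: str                    # ID identifying the person, e.g. "0006"
    face_image: Optional[bytes]       # cut-out face image; None if the face is hidden
    wearing_mask: bool                # attribute (1)
    wearing_sunglasses: bool          # attribute (2)
    fast_movement: bool               # attribute (3)
    face_hidden_while_moving: bool    # attribute (4)
    reproduction_start: float         # seconds from the start of the video

def satisfies(record: AttributeRecord, attribute: str) -> bool:
    """Look up one attribute determination result by field name."""
    return getattr(record, attribute)

rec = AttributeRecord("0001", b"...", True, False, False, False, 12.5)
```

A boolean per attribute mirrors the 〇/× cells of Fig. 2, and the reproduction start position travels with the record so the reproduction processing can seek directly.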
The operation unit 6 has input devices such as a mouse and a keyboard, and accepts the operator's input operations to the image processing apparatus 1. The operator performs input operations on the operation unit 6 such as designating the attribute type of interest, described later. The image processing processor 4 has a structure corresponding to the attribute designation accepting unit of the present invention, and accepts the attribute type designated by the operator's input operation on the operation unit 6.
The output unit 7 outputs the output image generated by the image processing processor 4, or the video image input to the image input unit 3 (reproduced image). A display device (not shown) is connected to the output unit 7 and displays the image output by the output unit 7.
Hereinafter, the operation of the image processing apparatus 1 will be described. The image processing apparatus 1 executes attribute determination result database creation processing, output image generation processing, reproduction processing, and so on, as described below.
Fig. 3 is a flowchart showing the attribute determination result database creation processing in the image processing apparatus. Using the image processing processor 4, the image processing apparatus 1 processes the frame images of the video image input to the image input unit 3 in chronological order.
The image processing processor 4 detects the objects captured in the frame image being processed (s1). In this example, the detected objects are people. In the detection, for example, each moving body detected from an inter-frame difference image or a background difference image is matched against human shapes by pattern matching, and objects with human shapes are detected as people.
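The detection of s1 can be illustrated with a toy inter-frame difference, assuming small grayscale frames stored as lists of pixel rows; the pattern matching against human shapes that follows in the real system is omitted:

```python
def frame_difference_mask(prev, curr, threshold=30):
    """Binary mask of pixels that changed between two grayscale frames
    (lists of rows). A simplified stand-in for the inter-frame
    difference image of step s1; the threshold value is assumed."""
    return [[1 if abs(c - p) > threshold else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def bounding_box(mask):
    """Bounding box (top, left, bottom, right) of the changed region,
    or None when nothing moved. Shape-based pattern matching would be
    applied to this region to decide whether it is a person."""
    coords = [(y, x) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    if not coords:
        return None
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    return (min(ys), min(xs), max(ys), max(xs))

prev = [[0] * 5 for _ in range(5)]
curr = [row[:] for row in prev]
curr[1][2] = 200        # a "moving body" appears
curr[3][3] = 180
box = bounding_box(frame_difference_mask(prev, curr))
```

A production system would instead use an adaptive background model and a trained human-shape classifier; the point here is only the change-detection idea the text names.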
For each person detected in s1, the image processing processor 4 creates a detection record that associates the frame number of the frame image being processed, the image of the person cut out from that frame image, and the person's position (coordinates) on the frame image (s2).
For each person for whom a detection record was created in s2, the image processing processor 4 determines whether the person was also captured in the previously processed frame image (s3). When it is determined in s3 that the person was not captured in the previously processed frame image (that is, a person who appeared between the capture time of the previously processed frame image and that of the currently processed frame image), the image processing processor 4 assigns an ID to that person (s4). The image processing processor 4 then creates an object map for the person with the ID assigned in s4 (s5). In s5, an object map registering the detection record created in s2 is created under the ID assigned in s4. A separate object map is created for each person to whom an ID has been assigned.
When it is determined in s3 that the person was already detected, the image processing processor 4 identifies the ID already assigned to that person (s6). The image processing processor 4 then updates the object map for the ID identified in s6 by additionally registering the detection record created in s2 (s7).
As described above, the processing of s3 to s7 is performed for each person for whom a detection record was created in s2.
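The ID assignment and object-map bookkeeping of s3 to s7 might be sketched as follows; nearest-position matching with an assumed distance threshold stands in for whatever matching criterion the actual implementation uses:

```python
import math

def update_object_maps(object_maps, last_positions, detections, frame_no,
                       next_id, match_dist=20.0):
    """One iteration of s3-s7: match each detected position to a person
    seen in the previous frame by nearest position; unmatched detections
    get a new ID (s4) and a fresh object map (s5). Matched detections
    are appended to the existing map (s6, s7)."""
    new_positions = {}
    for (x, y) in detections:
        best, best_d = None, match_dist
        for pid, (px, py) in last_positions.items():
            d = math.hypot(x - px, y - py)
            if d < best_d:
                best, best_d = pid, d
        if best is None:                          # new person: s4, s5
            best = next_id
            next_id += 1
            object_maps[best] = []
        object_maps[best].append((frame_no, (x, y)))   # detection record
        new_positions[best] = (x, y)
    return new_positions, next_id

maps, pos, nid = {}, {}, 1
pos, nid = update_object_maps(maps, pos, [(10, 10)], 0, nid)
pos, nid = update_object_maps(maps, pos, [(12, 11), (90, 40)], 1, nid)
```

After the two frames, person 1 has two detection records and the newly appeared person 2 has one; real trackers would also resolve contention when two detections claim the same person, which this sketch ignores.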
The image processing processor 4 extracts the people who were captured in the previously processed frame image but not in the currently processed frame image (people for whom detection is complete) (s8).
For each person extracted in s8, the image processing processor 4 performs attribute determination processing that determines the person's attributes (s9). In s9, for each person extracted in s8, it is determined whether the following four attributes are satisfied:
(1) whether the person is wearing a mask;
(2) whether the person is wearing sunglasses;
(3) whether the movement speed is high;
(4) whether the person is moving in a state in which the face cannot be captured.
The attribute determination processing of s9 is described in detail later. The attribute determination processing of s9 may also be separated from the other processing shown in Fig. 3 and executed in parallel with it.
The image processing processor 4 determines whether an unprocessed frame image remains in the video image input to the image input unit 3 (s10). When an unprocessed frame image remains, the image processing processor 4 updates the frame image to be processed to the next frame image in time (s11) and returns to s1.
When no unprocessed frame image remains in the video image input to the image input unit 3, the image processing processor 4 performs attribute determination processing for the people whose attributes have not yet been determined at that point (s12). S12 is the same processing as s9; it is provided so that attributes are also determined for the people captured in the last frame image of the video image input to the image input unit 3.
The attribute determination processing of s9 and s12 is now described. Fig. 4 is a flowchart showing the attribute determination processing. The processing of s9 and s12 corresponds to the attribute determination step of the present invention.
The image processing processor 4 repeats the processing shown in Fig. 4 for each person whose attributes are to be determined. That is, Fig. 4 shows the attribute determination processing for one person.
The image processing processor 4 reads the object map created for the target person (s21). As described above, the object map registers, for each frame image in which the target person was captured, a detection record associating the image of the person cut out from that frame image with the person's position (coordinates) on the frame image.
The image processing processor 4 detects the target person's movement path and movement speed from the object map read in s21 (s22). The movement path is obtained from the positions (coordinates) on the frame images in the detection records registered in the object map. The movement speed is obtained from the amount of change of the position on the frame images; this requires the relationship between distance on the frame image and distance in real space. When the video image input to the image input unit 3 was shot by a camera installed at a specific place, that relationship can be set in advance, and the movement speed of the target person in real space can be detected.
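The path and speed computation of s22 could look like this, with speed measured in pixels per frame as the text goes on to describe; the object-map layout (a list of frame-number/coordinate pairs) is an assumption:

```python
import math

def path_and_speed(object_map):
    """s22: movement path and average per-frame speed from an object
    map given as a list of (frame_no, (x, y)) detection records.
    Speed is in pixels per frame, since the pixel-to-real-space
    scale is unknown in the general case."""
    records = sorted(object_map)          # chronological order
    path = [p for _, p in records]
    if len(path) < 2:
        return path, 0.0
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(path, path[1:]))
    frames = records[-1][0] - records[0][0]
    return path, dist / frames

path, v = path_and_speed([(0, (0, 0)), (1, (3, 4)), (2, (6, 8))])
```

With a camera fixed at a known place, a precalibrated pixels-to-meters mapping could be applied to `dist` to recover real-space speed, as the text notes.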
On the other hand, when the video image input to the image input unit 3 was not shot by a specific camera (when the camera that shot the input video image varies from case to case), it is difficult to set in advance the relationship between distance on the frame image and distance in real space. For this reason, this example is configured to detect the movement speed on the frame image rather than the movement speed in real space.
Based on the movement path detected in s22, the image processing processor 4 determines whether the target person is moving in a direction in which the face can be captured (s23). For example, when the target person's moving direction is away from the camera shooting the video image (with the person's back to it), it is determined that the person is moving in a direction in which the face cannot be captured. Conversely, when the moving direction approaches the camera, it is determined that the person is moving in a direction in which the face can be captured.
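The direction test of s23 can be sketched by modelling the camera's viewing direction on the image plane as a vector, an assumption the text does not spell out:

```python
def moving_toward_camera(path, camera_axis=(0.0, 1.0)):
    """s23: decide whether the person approaches the camera. The
    camera's viewing direction projected onto the image plane is an
    assumed unit vector; a net displacement with a negative component
    along it means the person is coming toward the camera, so the
    face is capturable."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    return dx * camera_axis[0] + dy * camera_axis[1] < 0

# With the default axis, decreasing y means approaching the camera.
approaching = moving_toward_camera([(5, 9), (5, 4)])
```

Only the sign of one dot product is needed here; the magnitude of the displacement does not matter for the s23 decision.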
When it is determined in s23 that the target person is moving in a direction in which the face can be captured, the image processing processor 4 determines whether the attributes other than movement speed are satisfied (s24). In this example, the appearance-related attributes (1) whether the person is wearing a mask and (2) whether the person is wearing sunglasses are determined, as is the state-related attribute (4) whether the person is moving in a state in which the face cannot be captured.
When it is determined in s23 that the target person is not moving in a direction in which the face can be captured, the image processing processor 4 skips the processing of s24.
Fig. 5 is a flowchart showing the determination processing in s24 for the attributes other than movement speed. Using the detection records registered in the target person's object map, the image processing processor 4 obtains the target person's motion vector between temporally consecutive frame images (s31). The image processing processor 4 detects the horizontal component of the angle of the camera that shot the video image input to the image input unit 3 (s32), and identifies the frame image (the temporally later one of the pair) for which the angle between the motion vector and the line formed by the horizontal component is smallest (s33). In s33, a frame image in which the target person is likely to be facing the camera, and in which the face is therefore likely to be captured, is identified.
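The frame selection of s31 to s33 (and the second-smallest retry of s41) reduces to ordering frames by the angle between each motion vector and the horizontal component of the camera angle; the vector representations here are assumed:

```python
import math

def best_face_frames(motion_vectors, camera_horizontal=(1.0, 0.0)):
    """s31-s33: order frame indices by the angle between each
    inter-frame motion vector and the camera's horizontal component,
    smallest angle first. Index i stands for the later frame of the
    i-th vector pair."""
    def angle(v):
        dot = v[0] * camera_horizontal[0] + v[1] * camera_horizontal[1]
        norm = math.hypot(*v) * math.hypot(*camera_horizontal)
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    return sorted(range(len(motion_vectors)),
                  key=lambda i: angle(motion_vectors[i]))

order = best_face_frames([(0.0, 1.0), (1.0, 0.1), (-1.0, 0.0)])
```

Keeping the whole sorted order, rather than only the minimum, directly supports s41: when face detection fails on the best frame, the next index in `order` is the frame with the second-smallest angle.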
The image processing processor 4 performs first face image detection processing on the image of the target person cut out from the frame image identified in s33 (the image registered in the detection record) (s34). The first face image detection processing is based on a face-extraction algorithm trained on images of people not wearing masks.
If a face image is detected in the first face image detection processing of s34, the image processing processor 4 determines that the target person is not wearing a mask (s35, s36). On the other hand, if no face image is detected in the first face image detection processing of s34, the image processing processor 4 performs second face image detection processing on the image of the target person cut out from the frame image identified in s33 (s37). The second face image detection processing is based on a face-extraction algorithm trained on images of people wearing masks.
In this example, performing both the first and second face image detection processing prevents a person wearing a mask from being misjudged as having no face captured.
If a face image is detected in the second face image detection processing of s37, the image processing processor 4 determines that the target person is wearing a mask (s38, s39). On the other hand, if no face image is detected in the second face image detection processing of s37, the image processing processor 4 determines whether there is a detection record whose cut-out image has not yet been processed (an unprocessed detection record) (s40). When an unprocessed detection record exists, the image processing processor 4 identifies the frame image for which the angle between the motion vector and the line formed by the horizontal component of the camera angle is second smallest (s41), and performs the processing from s34 onward on that frame image.
When it is determined in s40 that no unprocessed detection record exists, the image processing processor 4 determines that the target person moved in a state in which the face could not be captured (s42). S42 is the determination of the state-related attribute of the target person.
In this example, the second face image detection processing is not performed for a target person whose face was detected by the first face image detection processing of s34; the person is simply determined not to be wearing a mask. However, both the first and second face image detection processing may be executed, and the determination may be made according to the detection accuracy of the face image in each.
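The two-stage flow of s34 to s42 can be sketched with detector callables standing in for the two trained algorithms; the detectors themselves are placeholders, not real face-detection code:

```python
def classify_face(image, detect_unmasked, detect_masked):
    """s34-s39 for a single image: try the unmasked-face detector
    first, then the masked-face detector."""
    if detect_unmasked(image):
        return "no_mask"          # s36
    if detect_masked(image):
        return "mask"             # s39
    return "face_hidden"          # no face found in this image

def determine_mask_attribute(images, detect_unmasked, detect_masked):
    """Try candidate frame images in order (best viewing angle first,
    per s33/s41); s42 applies only when no frame yields a face."""
    for img in images:
        result = classify_face(img, detect_unmasked, detect_masked)
        if result != "face_hidden":
            return result
    return "face_hidden"          # s42

result = classify_face("img", lambda im: False, lambda im: True)
```

The ordering of the two detectors encodes the text's point: running the masked-face detector as a fallback keeps a masked person from being misjudged as having no face captured.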
For the target persons determined in s36 not to be wearing a mask and those determined in s39 to be wearing a mask, the image processing processor 4 further determines whether the person is wearing sunglasses (s43). In s43, when the target person's face has an eyeglass frame and the brightness around the eyes (inside the detected frame, i.e., the lens portion) is lower than the brightness at locations such as the nose and cheeks by at least a predetermined value, the person is determined to be wearing sunglasses.
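The brightness comparison of s43 might be sketched as follows, with an assumed threshold value and with brightness samples standing in for measured pixel values from the lens and skin regions:

```python
def wearing_sunglasses(eye_region_brightness, nose_cheek_brightness,
                       threshold=40):
    """s43: with an eyeglass frame already detected, compare mean
    brightness inside the frames (lens area) against brightness at
    the nose and cheeks; a gap of at least `threshold` (an assumed
    value on a 0-255 scale) indicates dark lenses."""
    eye = sum(eye_region_brightness) / len(eye_region_brightness)
    skin = sum(nose_cheek_brightness) / len(nose_cheek_brightness)
    return skin - eye >= threshold

dark = wearing_sunglasses([20, 25, 22], [150, 140, 160])
```

Using a relative gap against the wearer's own skin brightness, rather than an absolute darkness threshold, makes the check less sensitive to overall scene illumination.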
When the processing of s24 is complete, the image processing processor 4 determines the attribute related to the target person's movement speed (s25). As described above, this example detects the movement speed of the target person on the frame image rather than in real space. The image processing processor 4 obtains the average movement speed Vave of the people on the frame images. The average movement speed Vave is the average of the movement speeds V detected for the many people captured in the video image input to the image input unit 3. In s25, by comparing the target person's movement speed V with the average movement speed Vave, it is determined whether the target person's movement speed V is higher than that of other people. Specifically, if movement speed V > average movement speed Vave + α (where α is a predetermined correction constant), the image processing processor 4 determines that the target person's movement speed is high.
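The speed rule of s25 is a one-liner once per-person speeds V and the correction constant α are available; the value of α used here is an assumption:

```python
def fast_movers(speeds, alpha=1.0):
    """s25: a person is determined to be moving fast when
    V > Vave + alpha, where Vave is the average speed over all
    people and alpha is a predetermined correction constant."""
    v_ave = sum(speeds.values()) / len(speeds)
    return {pid for pid, v in speeds.items() if v > v_ave + alpha}

fast = fast_movers({"0001": 2.0, "0002": 2.5, "0003": 8.0})
```

Because the comparison is against the population average on the same frames, the rule works whether speeds are in pixels per frame or real-space units.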
Based on the attribute determinations made in the above processing, the image processing processor 4 creates a record for the target person and registers it in the attribute determination result database 5 shown in Fig. 2 (s26).
Next, the output image generation processing will be described. As described above, the determination results of whether each person captured in the video image input to the image input unit 3 satisfies the following four attributes are registered in the attribute determination result database 5:
(1) whether the person is wearing a mask;
(2) whether the person is wearing sunglasses;
(3) whether the movement speed is high;
(4) whether the person is moving in a state in which the face cannot be captured.
Fig. 6 is a flowchart showing the output image generation processing.
The operator performs an input operation on the operation unit 6, such as designating the attribute type of interest. The image processing processor 4 accepts the attribute type designated by the operator's input operation on the operation unit 6 (s51). One or more attribute types may be accepted in s51. The processing of s51 corresponds to the attribute designation accepting step of the present invention.
The image processing processor 4 searches the attribute determination result database 5 and extracts the people determined to satisfy the attribute type accepted in s51 (s52). When multiple attribute types were accepted in s51, the image processing processor 4 may be configured either to extract the people determined to satisfy all of the accepted attribute types, or to extract the people determined to satisfy any one of them.
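The extraction of s52, including the two multi-attribute behaviours the text mentions, might be sketched as follows, with database records modelled as plain dictionaries:

```python
def extract_matching(records, attributes, mode="all"):
    """s52: filter database records (dicts mapping attribute name to
    a boolean determination result) by the accepted attribute types.
    'all' requires every attribute to be satisfied; 'any' requires
    at least one — the two configurations described in the text."""
    combine = all if mode == "all" else any
    return [r for r in records
            if combine(r[a] for a in attributes)]

db = [
    {"id": "0001", "mask": True,  "sunglasses": False},
    {"id": "0002", "mask": True,  "sunglasses": True},
    {"id": "0003", "mask": False, "sunglasses": True},
]
both = extract_matching(db, ["mask", "sunglasses"], "all")
either = extract_matching(db, ["mask", "sunglasses"], "any")
```

In a real deployment this filter would run against the persisted attribute determination result database; passing `all`/`any` directly keeps both configurations in one code path.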
The image processing processor 4 generates, as the output image, a thumbnail image displaying in one view the face images of the people extracted in s52 (s53). The processing of s53 corresponds to the output image generation step of the present invention. The image processing processor 4 outputs the thumbnail image generated in s53 from the output unit 7 (s54). The thumbnail image is displayed on the display device or the like connected to the output unit 7. The processing of s54 corresponds to the output step of the present invention.
Fig. 7 is the figure for showing the display picture example of the thumbnail image.Fig. 7 is that specified attribute type is " wearing masks " The example of situation.
Therefore, as long as the person has a distinguishing feature in appearance (such as wearing a mask or dark glasses), the time and labor spent confirming the matching person captured in the video image can be reduced. In addition, by using attributes related to states such as a fast movement speed, the number of objects to be confirmed can be narrowed down.
In the above example, a thumbnail image showing, in a single view, the face images of the persons satisfying the specified attribute type is generated and output as the output image. Alternatively, the output image may show, in a single view, the face images of all persons captured in the video image input to the image input unit 3, with the images of the persons satisfying the specified attribute type emphasized relative to the others, for example by drawing the frame of the face image with a thicker line or in a different color.
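The emphasized-display variant described above reduces to a per-face style decision. A toy Python sketch of that choice (the thickness and color values are illustrative assumptions, not from the patent):

```python
# Choose a frame style for each face image shown in the single view:
# persons satisfying the specified attribute type get a thicker,
# differently colored frame so they stand out among all faces.
def border_style(matches_specified_attribute):
    if matches_specified_attribute:
        return {"thickness": 4, "color": "red"}   # emphasized
    return {"thickness": 1, "color": "gray"}      # ordinary
```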
Next, the reproduction processing is described. Fig. 8 is a flowchart showing the reproduction processing.
The image processing processor 4 receives a specification of a face image displayed in the output image shown in Fig. 7 (s61). The image processing processor 4 judges the ID of the person whose face image was specified (s62). Using the ID judged in s62 as a key, the image processing processor 4 searches the attribute judgment result database 5 and obtains the reproduction start position of the matching person (s63). The image processing processor 4 reproduces the video image input to the image input unit 3 from the reproduction position obtained in s63 (s64). The processing of s64 corresponds to the reproduction unit described in the present invention. That is, the image processing processor 4 also has a configuration corresponding to the reproduction unit described in the present invention.
The video image to be reproduced is recorded on a hard disk or other medium (not shown) connected to the image input unit 3. The medium may be built into the image processing apparatus 1, or attached externally to the image processing apparatus 1.
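Steps s61–s63 reduce to two lookups: from the specified face image to a person ID, then from that ID to the reproduction start position. A minimal Python sketch under assumed data structures (the real system would then start playback at the returned position in s64):

```python
# Sketch of reproduction steps s61-s63: map a specified face image to its
# person ID, then look up that person's reproduction start position in the
# attribute judgment result database. All names are illustrative.
def reproduction_start_for(face_to_id, database, face_image):
    person_id = face_to_id[face_image]                          # s62: judge the ID
    record = next(r for r in database if r["id"] == person_id)  # s63: search the DB
    return record["reproduction_start"]                         # position for s64

face_to_id = {"face_007.jpg": 7}
database = [{"id": 7, "reproduction_start": 42.0}]
```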
Thus, an investigator such as a police officer can, simply by specifying a face image, easily confirm the video image in which the person of that face image is captured.
In the above example, whether the following four attributes are satisfied is judged:
(1) whether the person is wearing a mask;
(2) whether the person is wearing dark glasses;
(3) whether the movement speed is fast;
(4) whether the person is moving in a state in which the face cannot be captured.
However, the attribute types to be judged are not limited to these.
In the above example, the kind of object is a person, but it may be another kind of moving body. For example, when the kind of object is a vehicle, the following can be judged as attributes related to appearance:
(a) whether the number plate is hidden;
(b) whether the license plate number written on the number plate is hidden (the number plate itself is not hidden);
(c) whether the driver's face is hidden so that it cannot be seen from the front of the vehicle,
and the following can be judged as attributes related to state:
(d) whether the movement speed is fast;
(e) whether the vehicle is driving in the wrong direction;
(f) whether the vehicle is traveling in a zigzag;
(g) whether the vehicle is traveling without lights, and so on.
For (a), processing such as pattern matching is performed on the frame image in which the vehicle is captured, and when the number plate of the vehicle is not found, it can be judged that the number plate is hidden. For (b), processing such as pattern matching is performed on the frame image in which the vehicle is captured; when the number plate of the vehicle is found but the license plate number cannot be identified by the character recognition processing performed on the characters (the license plate number) written on it, it can be judged that the license plate number is hidden. For (c), when the driver's face cannot be detected in a frame image capturing the vehicle from substantially the front, using either the first face image detection processing or the second face image detection processing described above, it can be judged that the face is hidden so that the driver's face cannot be seen.
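The judgments (a) and (b) combine into a two-stage decision on the pattern-matching and character-recognition results. A minimal Python sketch (the boolean inputs stand in for the detection results described above; the return labels are illustrative):

```python
# Sketch of judgments (a) and (b): if pattern matching finds no number
# plate at all, the plate itself is judged hidden; if the plate is found
# but character recognition cannot read the number, only the license
# plate number is judged hidden.
def judge_plate(plate_found, number_readable):
    if not plate_found:
        return "plate_hidden"    # (a) the number plate is hidden
    if not number_readable:
        return "number_hidden"   # (b) only the license plate number is hidden
    return "visible"
```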
The judgment of (d) can be performed in the same way as in the case of a person described above. (e) and (f) can be judged from the movement path of the vehicle. For (g), in a situation where most vehicles have their headlamps lit, a vehicle whose headlamps are not lit can be judged to be traveling without lights. Whether the headlamps are lit can be judged based on whether high-brightness regions exist on the left and right sides of the vehicle in the captured frame image.
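The headlamp check for (g) can be sketched as a brightness test on regions to the left and right of the vehicle. A minimal Python illustration (the threshold and brightness values are assumptions, not from the patent):

```python
# Sketch of the check for (g): a vehicle is judged to be traveling without
# lights when, in a situation where most vehicles have lit headlamps, no
# high-brightness region appears on either side of it in the frame image.
def headlamps_lit(left_region_brightness, right_region_brightness, threshold=200):
    return (left_region_brightness >= threshold
            or right_region_brightness >= threshold)

def driving_unlit(vehicle_brightness_pair, most_vehicles_lit):
    return most_vehicles_lit and not headlamps_lit(*vehicle_brightness_pair)
```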
In this case as well, as long as the vehicle has a distinguishing feature in appearance, the time and labor spent confirming the matching vehicle captured in the video image can be reduced. In addition, by using attributes related to states such as a fast movement speed, the number of objects to be confirmed can be narrowed down.
In addition, the kind of object for constituting object is not limited only to above-mentioned people or vehicle, the movement of other types can also be Body.In addition, the kind of object for constituting object is not limited only to a kind, it is also possible to a variety of.In this case, it is preferable to according to composition pair Every kind of kind of object of elephant is dividually made attribute shown in Fig. 2 and determines result database 5.
Description of Reference Numerals
1 ... image processing apparatus
2 ... control unit
3 ... image input unit
4 ... image processing processor
5 ... attribute judgment result database
6 ... operation unit
7 ... output unit

Claims (3)

1. An image processing apparatus comprising:
an image input unit that inputs a video image;
an attribute judgment unit that processes frame images of the video image input to the image input unit and judges, for each captured object, whether the object satisfies predetermined attributes of the object, the attributes including an attribute related to the appearance of the object and an attribute related to the moving state of the object;
an attribute specification receiving unit that receives a specification of an object attribute, the object attribute including an attribute related to the appearance of an object and an attribute related to the moving state of an object;
an output image generation unit that generates an output image in which the objects judged by the attribute judgment unit to satisfy the object attribute received by the attribute specification receiving unit are distinguished from the other objects;
an output unit that outputs the output image generated by the output image generation unit;
an attribute judgment result storage unit that stores, for each object captured in the video image input to the image input unit, the image of the object, the judgment results of the attribute judgment unit as to whether the object satisfies each of the predetermined plurality of attributes, and the reproduction start position in the input video image, in association with one another; and
a reproduction unit that, when any object included in the output image output by the output unit is selected and specified, searches the attribute judgment result storage unit, obtains the position at which the object is captured in the video image input to the image input unit, and reproduces the video image from the position at which the object is captured,
wherein the attribute judgment unit judges, as the attribute related to the moving state of an object, at least whether the movement speed of the object is faster than that of other objects, using as a reference not the average movement speed of objects in real space but the average movement speed of objects on the frame image.
2. The image processing apparatus according to claim 1, wherein
the output image generation unit generates, as the output image, a thumbnail image showing, in a single view, the objects judged by the attribute judgment unit to satisfy the object attribute received by the attribute specification receiving unit.
3. An image processing method that causes a computer to execute:
an attribute judgment step of processing frame images of a video image input to an image input unit and judging, for each captured object, whether the object satisfies predetermined attributes of the object, the attributes including an attribute related to the appearance of the object and an attribute related to the moving state of the object;
an attribute specification receiving step of receiving, in an attribute specification receiving unit, a specification of an object attribute, the object attribute including an attribute related to the appearance of an object and an attribute related to the moving state of an object;
an output image generation step of generating an output image in which the objects judged in the attribute judgment step to satisfy the object attribute received in the attribute specification receiving step are distinguished from the other objects;
an output step of outputting, from an output unit, the output image generated in the output image generation step;
an attribute judgment result storing step of storing, in an attribute judgment result storage unit, for each object captured in the video image input to the image input unit, the image of the object, the judgment results of the attribute judgment step as to whether the object satisfies each of the predetermined plurality of attributes, and the reproduction start position in the input video image, in association with one another; and
a reproduction step of, when any object included in the output image output by the output unit is selected and specified, searching the attribute judgment result storage unit, obtaining the position at which the object is captured in the video image input to the image input unit, and reproducing the video image from the position at which the object is captured,
wherein in the attribute judgment step, at least whether the movement speed of an object is faster than that of other objects is judged as the attribute related to the moving state of the object, using as a reference not the average movement speed of objects in real space but the average movement speed of objects on the frame image.
CN201480002199.XA 2013-07-12 2014-06-23 Image processing apparatus and image processing method Expired - Fee Related CN104604219B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2013-145981 2013-07-12
JP2013145981A JP6405606B2 (en) 2013-07-12 2013-07-12 Image processing apparatus, image processing method, and image processing program
PCT/JP2014/066526 WO2015005102A1 (en) 2013-07-12 2014-06-23 Image processing device, image processing method, and image processing program

Publications (2)

Publication Number Publication Date
CN104604219A CN104604219A (en) 2015-05-06
CN104604219B true CN104604219B (en) 2019-04-12

Family

ID=52279788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201480002199.XA Expired - Fee Related CN104604219B (en) 2013-07-12 2014-06-23 Image processing apparatus and image processing method

Country Status (3)

Country Link
JP (1) JP6405606B2 (en)
CN (1) CN104604219B (en)
WO (1) WO2015005102A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117835B (en) * 2017-06-26 2022-10-14 佳能株式会社 Image processing apparatus and method
JP7444423B2 (en) 2019-05-20 2024-03-06 i-PRO株式会社 Vehicle monitoring system and vehicle monitoring method
WO2022091297A1 (en) * 2020-10-29 2022-05-05 日本電気株式会社 Spectator monitoring device, spectator monitoring system, spectator monitoring method, and non-transitory computer-readable medium
KR102465437B1 (en) * 2022-05-30 2022-11-10 (주)이앤제너텍 Apparatus and method for tracking object based on artificial intelligence

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2004062868A (en) * 2002-07-10 2004-02-26 Hewlett-Packard Development Co Lp Digital camera and method for identifying figure in image
CN101980242A (en) * 2010-09-30 2011-02-23 徐勇 Human face discrimination method and system and public safety system
CN102045162A (en) * 2009-10-16 2011-05-04 电子科技大学 Personal identification system of permittee with tri-modal biometric characteristic and control method thereof
CN102890719A (en) * 2012-10-12 2013-01-23 浙江宇视科技有限公司 Method and device for fuzzy research of license plate numbers

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP4541316B2 (en) * 2006-04-06 2010-09-08 三菱電機株式会社 Video surveillance search system
JP4944818B2 (en) * 2008-02-28 2012-06-06 綜合警備保障株式会社 Search device and search method
JP5523900B2 (en) * 2009-03-31 2014-06-18 綜合警備保障株式会社 Person search device, person search method, and person search program
JP5574692B2 (en) * 2009-12-17 2014-08-20 キヤノン株式会社 Video information processing method and apparatus
JP5400718B2 (en) * 2010-07-12 2014-01-29 株式会社日立国際電気 Monitoring system and monitoring method
JP5793353B2 (en) * 2011-06-20 2015-10-14 株式会社東芝 Face image search system and face image search method
CN102789690B (en) * 2012-07-17 2014-08-20 公安部道路交通安全研究中心 Illegal vehicle identifying method and system


Also Published As

Publication number Publication date
JP6405606B2 (en) 2018-10-17
WO2015005102A1 (en) 2015-01-15
CN104604219A (en) 2015-05-06
JP2015019296A (en) 2015-01-29


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190412