CN101714210B - Image processing method, image processing device and image processing system - Google Patents


Info

Publication number
CN101714210B
CN101714210B (application CN200910224864A)
Authority
CN
China
Prior art keywords
pixel
image processing
brightness
detected object
value
Prior art date
Legal status
Active
Application number
CN 200910224864
Other languages
Chinese (zh)
Other versions
CN101714210A (en)
Inventor
伊藤寿雄
马场幸三
田福明义
斋藤拓
东野全寿
片桐卓
Current Assignee
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to CN200910224864
Publication of CN101714210A
Application granted
Publication of CN101714210B
Legal status: Active
Anticipated expiration

Abstract

The invention provides an image processing method, an image processing device, and an image processing system. The image processing method can detect detection objects, such as the nostrils of a driver, with high precision in, for example, a system in which an on-vehicle camera mounted on a vehicle captures images of the driver's face. Detection is diversified by a plurality of detection methods, including the following: detecting a plurality of positions in the vertical direction of the captured image as candidates; for the pixel row arranged in the horizontal direction at each detected position, detecting a candidate range for the detection object based on pixel brightness; and determining the detection object from among the candidates according to the lengths of the detected ranges.

Description

Image processing method, image processing apparatus and image processing system
This application is a divisional application of invention patent application No. 200580047809.9 (international application No. PCT/JP2005/002928; filing date: February 23, 2005; title of invention: image processing method, image processing apparatus, image processing system and computer program).
Technical Field
The present invention relates to an image processing method for detecting a specific detection object from a two-dimensional image in which pixels are arranged in a first direction and a second, different direction, to an image processing apparatus suited to this image processing method, to an image processing system comprising this image processing apparatus, and to a computer program for realizing the image processing apparatus; in particular, it relates to an image processing method, image processing apparatus, image processing system, and computer program capable of improving the detection accuracy of the detection object.
Background Art
As a device for assisting the driving of vehicles such as automobiles, an image processing apparatus has been proposed in which the driver's face is photographed by an on-vehicle camera arranged at a position from which the driver's face can be captured, and image processing is performed on the resulting image to detect the outline of the driver's face and the positions of the eyes (see, for example, Patent Document 1). Using such a device, a system can be constructed that detects the driver's condition and provides driving assistance, such as warning the driver when inattentive or dozing off. In addition, since external light such as the setting sun frequently shines into the vehicle onto the driver's face, the illumination of the driver's face while driving is not constant; to keep the brightness of the captured face image constant, the automatic gain function of the on-vehicle camera can adjust it to a certain degree.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2004-234367
Summary of the invention
However, while the automatic gain function can cope with external light such as sunlight or reflected light that illuminates the driver's face uniformly, it cannot cope with local changes in illumination, since it adjusts the brightness of the image as a whole. For example, when sunlight illuminates only the left half of the face, the following false detection can occur: the dark part not illuminated by sunlight is not recognized as part of the face, and only the bright part is detected as the face outline. Because the light illuminating the driver's face in a moving vehicle changes constantly, sufficient accuracy cannot be obtained unless the face and the parts it contains are judged comprehensively by a plurality of methods.
The present invention has been made to solve the above problems. Its primary object is to provide an image processing method, an image processing apparatus suited to the method, an image processing system comprising the apparatus, and a computer program for realizing the apparatus, in which, when a detection object such as the nostrils of a person is detected from an image obtained by photographing, a plurality of positions in the vertical direction of the captured image are detected as candidates; for the pixel row arranged in the horizontal direction at each detected position, a candidate range for the detection object is detected based on pixel brightness; and the detection object is determined from among the candidates based on the lengths of the detected ranges, thereby diversifying the detection methods and improving detection accuracy.
Another object of the present invention is to provide an image processing method, image processing apparatus, image processing system, and computer program in which the detection object is detected according to a result obtained by performing, on the brightness of one pixel, an addition based on the brightness of the other adjacent pixels and a subtraction based on the brightness of pixels separated from the pixel by a predetermined distance in the horizontal and vertical directions of the captured image, thereby diversifying the detection methods and improving detection accuracy.
A further object of the present invention is to provide an image processing method, image processing apparatus, image processing system, and computer program in which the detection object is detected as a range in the horizontal direction and a range in the vertical direction, the horizontal range being obtained from a result of accumulating, in the vertical direction, the brightness changes of the pixels arranged in the horizontal direction of the captured image, and the vertical range being obtained from a result of accumulating, in the horizontal direction, the brightness changes of the pixels arranged in the vertical direction, thereby diversifying the detection methods and improving detection accuracy.
A still further object of the present invention is to provide an image processing method, image processing apparatus, image processing system, and computer program in which the priority of the detection methods is determined based on the mean value and variance of the brightness, so that an effective detection method can be selected according to the situation, thereby improving detection accuracy.
The image processing method of the first invention detects a specific detection object from a two-dimensional image in which pixels are arranged in a first direction and a second, different direction, and is characterized in that the brightness of one pixel is converted according to the result of an addition and a subtraction, and the detection object is detected based on the conversion result, the addition being performed based on the brightness of the other pixels adjacent to the pixel, and the subtraction being performed based on the brightness of a pixel at a position separated from the pixel by a predetermined distance in the first direction and the brightness of a pixel at a position separated from the pixel by a predetermined distance in the second direction.
The image processing method of the second invention detects a specific detection object from a two-dimensional image in which pixels are arranged in a first direction and a second, different direction, and is characterized in that: values based on the brightness changes of the pixels arranged in the first direction are accumulated in the second direction to derive the variation of the accumulated value along the first direction; values based on the brightness changes of the pixels arranged in the second direction are accumulated in the first direction to derive the variation of the accumulated value along the second direction; and the detection object is detected according to a range in the first direction obtained from the derived variation of the accumulated value along the first direction and a range in the second direction obtained from the derived variation of the accumulated value along the second direction.
The image processing method of the third invention detects a specific detection object from a two-dimensional image comprising a plurality of pixels by a plurality of detection methods, and is characterized in that the mean value of the brightness of the pixels is calculated, the variance of the brightness of the pixels is calculated, and the priority of the detection methods is determined according to the calculated mean value and variance.
The image processing apparatus of the fourth invention detects a specific detection object from a two-dimensional image in which pixels are arranged in a first direction and a second, different direction, and is characterized in that it comprises: a unit which converts the brightness of one pixel according to the result of an addition and a subtraction, the addition being performed based on the brightness of the other pixels adjacent to the pixel, and the subtraction being performed based on the brightness of a pixel at a position separated from the pixel by a predetermined distance in the first direction and the brightness of a pixel at a position separated from the pixel by a predetermined distance in the second direction; and a detecting unit which detects the detection object according to the conversion result.
The image processing apparatus of the fifth invention is characterized in that, in the fourth invention, the detecting unit detects the pixel whose converted value is the minimum as the detection object.
The image processing apparatus of the sixth invention detects a specific detection object from a two-dimensional image in which pixels are arranged in a first direction and a second, different direction, and is characterized in that it comprises: a first deriving unit which accumulates, in the second direction, values based on the brightness changes of the pixels arranged in the first direction, thereby deriving the variation of the accumulated value along the first direction; a second deriving unit which accumulates, in the first direction, values based on the brightness changes of the pixels arranged in the second direction, thereby deriving the variation of the accumulated value along the second direction; and a detecting unit which detects the detection object according to a range in the first direction obtained from the variation of the accumulated value derived by the first deriving unit and a range in the second direction obtained from the variation of the accumulated value derived by the second deriving unit.
The image processing apparatus of the seventh invention is characterized in that, in the sixth invention, the first deriving unit accumulates, in the second direction, an index obtained from a value based on the luminance difference between pixels adjacent in the first direction and a value indicating which of the adjacent pixels is brighter, thereby deriving the variation of the accumulated value along the first direction; the second deriving unit accumulates, in the first direction, an index obtained from a value based on the luminance difference between pixels adjacent in the second direction and a value indicating which of the adjacent pixels is brighter, thereby deriving the variation of the accumulated value along the second direction; and the detecting unit detects the detection object according to the range in the first direction from the position of the maximum accumulated value derived by the first deriving unit to the position of the minimum accumulated value derived by the first deriving unit, and the range in the second direction from the position of the maximum accumulated value derived by the second deriving unit to the position of the minimum accumulated value derived by the second deriving unit.
The image processing apparatus of the eighth invention detects a specific detection object from a two-dimensional image comprising a plurality of pixels by a plurality of detection methods, and is characterized in that it comprises: a unit which calculates the mean value of the brightness of the pixels; a unit which calculates the variance of the brightness of the pixels; and a unit which determines the priority of the detection methods according to the calculated mean value and variance.
The image processing system of the ninth invention is characterized in that it comprises: the image processing apparatus according to any one of the fourth to eighth inventions; and an imaging device which generates the image processed by the image processing apparatus, wherein the detection object is a region including the nostrils of a person in the image captured by the imaging device, the first direction is the horizontal direction, and the second direction is the vertical direction.
The computer program of the tenth invention causes a computer to detect a specific detection object from a two-dimensional image in which pixels are arranged in a first direction and a second, different direction, and is characterized in that it causes the computer to execute the steps of: converting the brightness of one pixel according to the result of an addition and a subtraction, the addition being performed based on the brightness of the other pixels adjacent to the pixel, and the subtraction being performed based on the brightness of a pixel at a position separated from the pixel by a predetermined distance in the first direction and the brightness of a pixel at a position separated from the pixel by a predetermined distance in the second direction; and detecting the detection object according to the conversion result.
The computer program of the eleventh invention causes a computer to detect a specific detection object from a two-dimensional image in which pixels are arranged in a first direction and a second, different direction, and is characterized in that it causes the computer to execute the steps of: accumulating, in the second direction, values based on the brightness changes of the pixels arranged in the first direction, thereby deriving the variation of the accumulated value along the first direction; accumulating, in the first direction, values based on the brightness changes of the pixels arranged in the second direction, thereby deriving the variation of the accumulated value along the second direction; and detecting the detection object according to a range in the first direction obtained from the derived variation of the accumulated value along the first direction and a range in the second direction obtained from the derived variation of the accumulated value along the second direction.
The computer program of the twelfth invention causes a computer to detect a specific detection object from a two-dimensional image comprising a plurality of pixels by a plurality of detection methods, and is characterized in that it causes the computer to execute the steps of: calculating the mean value of the brightness of the pixels; calculating the variance of the brightness of the pixels; and determining the priority of the detection methods according to the calculated mean value and variance.
In the first, fourth, fifth, and tenth inventions, when, for example, the nostrils in a person's face in a captured image are the detection object and the horizontal and vertical directions are taken as the first and second directions respectively, a conversion process is performed that emphasizes positions where the brightness of the adjacent pixels is low and the surrounding brightness is high, that is, small regions of low brightness, so that the detection object can be detected with high accuracy. In particular, when used together with other methods of detecting the detection object, the detection methods can be diversified and the position of the detection object can be judged comprehensively, so that detection accuracy can be further improved.
In the second, sixth, seventh, and eleventh inventions, when, for example, the region around the nostrils in a person's face in a captured image is the detection object and the horizontal and vertical directions are taken as the first and second directions respectively, a value is derived in each of the vertical and horizontal directions that is large where the brightness drops sharply and small where it rises sharply, and the detection object is detected from the derived values as a quadrangular region darker than its surroundings, so that the detection object can be detected with high accuracy. In particular, since it is not the nostrils themselves that are detected but the region around and below the nostrils, the detection object can be detected even when the face is tilted upward to an angle at which the nostrils are difficult to detect. When used together with other methods, the detection methods can be diversified and the position of the detection object can be judged comprehensively, so that detection accuracy can be further improved. Moreover, when the positions of the eyes, the nose, and so on are detected by other methods, the search region can be narrowed based on the positional relations between the detected positions, and the detection of the corresponding invention can then be performed, so that the detection object can be detected still more accurately.
In the third, eighth, and twelfth inventions, when a detection object such as the nostrils in a person's face in a captured image is detected, whether a local illumination change has occurred on the face is judged based on the mean value and variance of the brightness, and the priority of the detection methods is determined according to the judged situation; since the most reliable detection method and the detection order are thus selected from among the various detection methods according to the situation, detection accuracy can be further improved.
In the 17th invention, since the region including the nostrils of a person can be detected with high accuracy, the invention is suitable for a system in which the driver's face is detected as the detection object from an image obtained by photographing the driver's face with an imaging device such as an on-vehicle camera installed in the vehicle, so that the driver's condition can be detected and driving assistance, such as warning the driver when inattentive, can be realized.
The image processing method, image processing apparatus, image processing system, and computer program of the present invention can be applied, for example, in such a manner that the detection object is the region between the left and right wings of the nose, including the nostrils, in a person's face in an image obtained by photographing the driver's face with an imaging device such as an on-vehicle camera installed in a vehicle. The image processing apparatus of the present invention then executes processing such as the following: the brightness of the pixels arranged in the horizontal direction, which is the first direction, is accumulated; the variation of the accumulated value in the vertical direction, which is the second direction, is derived; a plurality of positions at which the derived accumulated value is a minimum are detected as candidates containing the detection object; the variation of the accumulated value is further subjected to second-order differentiation to narrow the candidates down to a specified number; for the pixel row arranged in the first direction at each narrowed-down position, the range in the first direction of a candidate for the detection object is detected based on pixel brightness; and the detection object is determined from among the candidates based on the lengths of the detected ranges.
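The candidate-detection steps just described (accumulate horizontal brightness, take the minima of the vertical profile as candidate rows, then narrow by the second derivative) can be sketched roughly as follows. The narrowing rule, the use of `np.gradient`, and the synthetic face values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def row_candidates(face, max_candidates=3):
    """Accumulate brightness along each horizontal row and pick the rows
    where the profile has a local minimum (eyebrows, eyes, nostrils and
    mouth are darker than skin). Candidates are then narrowed by the
    strength of the profile's second derivative (sharper dips first)."""
    profile = face.astype(float).sum(axis=1)            # vertical variation
    minima = [y for y in range(1, len(profile) - 1)
              if profile[y] < profile[y - 1] and profile[y] <= profile[y + 1]]
    curvature = np.gradient(np.gradient(profile))       # second difference
    minima.sort(key=lambda y: curvature[y], reverse=True)
    return sorted(minima[:max_candidates])

# Synthetic face: uniform skin with three darker horizontal features.
face = np.full((24, 24), 200.0)
face[6, :] = 80.0       # eye row
face[14, 4:20] = 60.0   # nostril row (darker but horizontally shorter)
face[20, :] = 90.0      # mouth row
rows = row_candidates(face)   # -> [6, 14, 20]
```

Each returned row would then be examined individually along the horizontal direction, as the description goes on to explain.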
With this structure, the present invention focuses on the brightness distribution in the vertical direction, detects a plurality of candidates including the eyebrows, eyes, and mouth, which have low brightness, and determines the detection object from among the candidates based on the width in the horizontal direction, so that the excellent effect of detecting the detection object with high accuracy is obtained. In particular, when the present invention is used together with other methods of detecting the detection object based on the brightness distribution in the horizontal direction, the detection methods can be diversified and the position of the detection object can be judged comprehensively, yielding excellent effects such as further improved detection accuracy.
Moreover, by improving the detection accuracy of the detection object, the present invention correctly detects the driver's condition; when applied to a system for driving assistance that, for example, warns the driver against inattentive driving, the remarkable effect is obtained that a reliable driving assistance system with few false detections can be constructed even when driving in an environment where the external light conditions change constantly.
Furthermore, in the image processing apparatus of the present invention and the like, when the candidate range for the detection object is detected in the pixel row arranged in the horizontal direction, which is the first direction, the range is detected based on the changes in the brightness of the pixels arranged in the first direction. Specifically, by a filtering process that emphasizes the parts where the brightness in the horizontal direction changes sharply, the range between the two ends of the low-brightness run of pixels arranged in the first direction is detected, so that the difference in horizontal size between the region including the nostrils and other parts such as the eyebrows, eyes, and mouth becomes clear, yielding remarkable effects such as improved detection accuracy.
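A minimal sketch of locating the two ends of the low-brightness run in one horizontal pixel row, assuming a simple difference filter with an illustrative threshold (the patent's actual filter coefficients are not given here):

```python
import numpy as np

def dark_run_range(row, edge_thresh=50.0):
    """Emphasize sharp horizontal brightness changes with a first
    difference: a sharp drop marks the left end of the dark run, a
    sharp rise the right end. Returns inclusive (left, right) ends,
    or None if no dark run is found."""
    diff = np.diff(row.astype(float))        # diff[i] = row[i+1] - row[i]
    drops = np.where(diff < -edge_thresh)[0]
    rises = np.where(diff > edge_thresh)[0]
    if drops.size == 0 or rises.size == 0:
        return None
    return drops[0] + 1, rises[-1]

# One row: bright skin, a dark span (pixels 8..13), bright skin again.
row = np.array([200] * 8 + [70] * 6 + [200] * 10, dtype=float)
span = dark_run_range(row)        # -> (8, 13)
width = span[1] - span[0] + 1     # -> 6
```

The detected width is what the following paragraph compares against the face width to separate the nostril region from eyebrows, eyes, and mouth.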
Furthermore, in the image processing apparatus of the present invention and the like, the width of the face is detected and each candidate is compared against it; for example, a candidate whose horizontal range is 22% to 43% of the face width is determined to be the detection object, so that it can be clearly distinguished from other parts such as the eyebrows, eyes, and mouth, yielding remarkable effects such as improved detection accuracy.
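The 22%-43% width criterion quoted above can be expressed directly; the helper name and widths below are ours, only the ratio bounds come from the description:

```python
def is_nostril_width(candidate_width, face_width, lo=0.22, hi=0.43):
    """Accept a candidate whose horizontal width is 22%-43% of the
    detected face width (the example ratio range from the description)."""
    ratio = candidate_width / face_width
    return lo <= ratio <= hi

accepted = is_nostril_width(30, 100)   # 30% of face width -> True
rejected = is_nostril_width(60, 100)   # 60% (e.g. a mouth) -> False
```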
Furthermore, in the image processing apparatus of the present invention and the like, even when a detection object has been determined, if, in the pixel row arranged in the horizontal direction included in the determined detection object, the number of parts at which the accumulated vertical brightness value for the detection object is a minimum and the brightness is lower than the surroundings, that is, the number of parts likely to be nostrils, is less than the specified number representing the number of nostrils, namely 2, the candidate is judged to be unlikely to be the nostrils and is judged not to be the detection object; this yields remarkable effects such as a reduced possibility of falsely detecting a part other than the nostrils as the nostrils.
Furthermore, in the image processing apparatus of the present invention and the like, if the number of pixels arranged in the vertical direction exceeds a specified number, it is judged highly likely that a spectacle frame has been detected, and the candidate is judged not to be the detection object, yielding remarkable effects such as a reduced possibility of false detection.
The image processing method, image processing apparatus, image processing system, and computer program of the present invention can also be applied in such a manner that the detection object is the nostrils in a person's face in an image obtained by photographing the driver's face with an imaging device such as an on-vehicle camera installed in a vehicle. The image processing apparatus of the present invention then executes processing such as the following: the brightness of one pixel is converted according to the result of an addition and a subtraction, and the pixel with the minimum converted value is detected as the detection object, the addition being performed based on the brightness of the other pixels adjacent to the pixel, and the subtraction being performed based on the brightness of a pixel at a position separated from the pixel by a predetermined distance in the horizontal direction, which is the first direction, and the brightness of a pixel at a position separated from the pixel by a predetermined distance in the vertical direction, which is the second direction.
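The conversion just described can be sketched as follows. The neighborhood size, the distance `d`, and the subtraction weight are illustrative assumptions, not values from the patent; the point is only that a small dark region surrounded by brighter pixels is driven strongly negative, so its pixel becomes the minimum:

```python
import numpy as np

def convert(img, d=5):
    """For each pixel, add the brightness of its adjacent pixels and
    subtract (weighted) the brightness of pixels a distance d away
    horizontally and vertically. Weights and d are illustrative."""
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(d, h - d):
        for x in range(d, w - d):
            neighbors = img[y - 1:y + 2, x - 1:x + 2].sum()   # addition term
            distant = (img[y, x - d] + img[y, x + d] +
                       img[y - d, x] + img[y + d, x])          # subtraction term
            out[y, x] = neighbors - 2.25 * distant             # balances 9 vs 4 pixels
    return out

img = np.full((40, 40), 200.0)
img[20:23, 20:23] = 40.0                                 # small dark region ("nostril")
score = convert(img)
y, x = np.unravel_index(np.argmin(score), score.shape)   # -> (21, 21)
```

On uniform bright skin the two terms cancel to roughly zero, while inside the dark spot the addition term collapses but the distant bright pixels are still subtracted, which is why the converted value is minimal there.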
With this structure of the present invention, a conversion process is performed that emphasizes positions where the brightness of the adjacent pixels is low and the surrounding brightness is high, that is, small regions of low brightness, yielding remarkable effects such as accurate detection of the nostrils, which are small low-brightness regions. In particular, the present invention can detect the nostrils accurately when the face is tilted upward, so when used together with other methods of detecting the detection object, the detection methods can be diversified and the position of the detection object can be judged comprehensively, yielding remarkable effects such as further improved detection accuracy.
Moreover, by improving the detection accuracy of the detection object, the present invention correctly grasps the driver's condition; when applied to a system for driving assistance that, for example, warns the driver when inattentive, the remarkable effect is obtained that a driving assistance system with few false detections and high reliability can be constructed even when traveling in an environment where the external light changes frequently.
The image processing method, image processing apparatus, image processing system, and computer program of the present invention can also be applied in such a manner that the detection object is the region around and below the nostrils in a person's face in an image obtained by photographing the driver's face with an imaging device such as an on-vehicle camera installed in a vehicle. The image processing apparatus of the present invention then executes the following processing: in the vertical direction, which is the second direction, an index obtained by multiplying a value based on the luminance difference between pixels adjacent in the horizontal direction, which is the first direction, by a value indicating which of the adjacent pixels is brighter is accumulated to derive the variation of the accumulated value along the first direction; in the first direction, an index obtained by multiplying a value based on the luminance difference between pixels adjacent in the second direction by a value indicating which of the adjacent pixels is brighter is accumulated to derive the variation of the accumulated value along the second direction; and the detection object is detected according to the range in the first direction and the range in the second direction, each running from the position of the maximum of the derived accumulated value to the position of its minimum.
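Assuming the simplest form of such an index, a signed difference (a brightness drop between adjacent pixels counts positive, a rise negative — one way to combine the difference value with a brighter-side indicator), the max-to-min range detection can be sketched as:

```python
import numpy as np

def dark_block_range(img):
    """Locate a region darker than its surroundings as a rectangle.
    For each direction, accumulate a signed edge value across the other
    direction; the range runs from the profile's maximum (entering the
    dark region) to its minimum (leaving it). Sign convention assumed."""
    diff_x = -np.diff(img.astype(float), axis=1)   # drop left->right positive
    profile_x = diff_x.sum(axis=0)                 # accumulate over rows
    x0, x1 = np.argmax(profile_x), np.argmin(profile_x)

    diff_y = -np.diff(img.astype(float), axis=0)   # drop top->bottom positive
    profile_y = diff_y.sum(axis=1)                 # accumulate over columns
    y0, y1 = np.argmax(profile_y), np.argmin(profile_y)
    return (x0, x1 + 1), (y0, y1 + 1)

img = np.full((30, 30), 220.0)
img[10:18, 8:20] = 60.0                  # dark rectangle below the nostrils
xr, yr = dark_block_range(img)           # -> (7, 20), (9, 18)
```

The two returned intervals together bound the quadrangular dark region, matching the description of detecting it from the maximum and minimum of each accumulated profile.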
With this structure, the present invention derives, in each of the vertical and horizontal directions, a value that is large where the brightness drops sharply and small where the brightness rises sharply, and detects the detection object, based on the derived values, as a quadrangular region darker than its surroundings, yielding remarkable effects such as accurate detection of the detection object. In particular, since the present invention detects not the nostrils themselves but the region around and below the nostrils as a quadrangular region darker than its surroundings, the detection object can be detected even when the face is tilted upward to an angle at which the nostrils are difficult to detect. Moreover, if the positions of the eyes, the nose, and so on are detected by other methods, the search region can be narrowed based on the positional relations between the detected positions, and the detection of the corresponding invention can then be performed, yielding the excellent effect of detecting the detection object still more accurately. Therefore, when the present invention is used together with other methods of detecting the detection object, the detection methods can be diversified and the position of the detection object can be judged comprehensively, yielding remarkable effects such as further improved detection accuracy.
Moreover, by improving the detection accuracy of the detection object, the present invention correctly detects the driver's condition; when applied to a system that warns an inattentive driver and otherwise assists driving, the remarkable effect is obtained that a reliable driving assistance system with few false detections can be constructed even when driving in an environment where the external light conditions change constantly.
The image processing method, image processing apparatus, image processing system and computer program of the present invention can also be applied in the following mode: for example, to an image obtained by photographing a driver's face with an imaging device such as an on-vehicle camera installed in a vehicle, with the region of the person's face that includes the nostrils as the detection object. The image processing apparatus of the present invention then performs processing that calculates the mean value of pixel brightness and the dispersion value (variance) of pixel brightness, and, on the basis of the calculated mean and dispersion values, decides the order of priority of detection methods among a plurality of detection methods.
With this configuration, the present invention judges, from the mean and dispersion values of the brightness, whether a local illumination change has occurred on the person's face, and decides the priority of the detection methods according to the result of the judgement; a detection method and detection order of high reliability can thus be selected from the various detection methods according to the actual situation, further improving the detection accuracy.
Description of drawings
Fig. 1 is a block diagram showing a configuration example of the image processing system according to Embodiment 1 of the present invention.
Fig. 2 is a flowchart showing a processing example of the image processing apparatus used in the image processing system according to Embodiment 1 of the present invention.
Fig. 3 is a flowchart showing a processing example of the image processing apparatus used in the image processing system according to Embodiment 1 of the present invention.
Fig. 4 is an explanatory diagram conceptually showing an example of processing, from determining the image range to detecting candidates for the detection object, in the image processing system according to Embodiment 1 of the present invention.
Fig. 5 is an explanatory diagram schematically showing an example of the range on which the image processing system according to Embodiment 1 of the present invention performs end-detection processing.
Fig. 6 is an explanatory diagram showing an example of the coefficients used in the end-detection processing of the image processing system according to Embodiment 1 of the present invention.
Fig. 7 is an explanatory diagram schematically showing an example of the range on which the image processing system according to Embodiment 1 of the present invention performs end-detection processing.
Fig. 8 is an explanatory diagram schematically showing an example of the range on which the image processing system according to Embodiment 1 of the present invention performs end-detection processing.
Fig. 9 is an explanatory diagram showing candidates for the detection object in the image processing system according to Embodiment 1 of the present invention.
Fig. 10 is an explanatory diagram conceptually showing the nostril region mark of the image processing system according to Embodiment 1 of the present invention.
Fig. 11 is a block diagram showing a configuration example of the image processing system according to Embodiment 2 of the present invention.
Fig. 12 is a flowchart showing a processing example of the image processing apparatus used in the image processing system according to Embodiment 2 of the present invention.
Fig. 13 is an explanatory diagram conceptually showing a setting example of the detection range of the image processing system according to Embodiment 2 of the present invention.
Fig. 14 is an explanatory diagram conceptually showing a setting example of the search range of the image processing system according to Embodiment 2 of the present invention.
Fig. 15 is an explanatory diagram showing an example of the coefficients used in the black-region calculation filter processing of the image processing system according to Embodiment 2 of the present invention.
Fig. 16 is an explanatory diagram conceptually showing a detection example using the black-region calculation filter processing of the image processing system according to Embodiment 2 of the present invention.
Fig. 17 is a block diagram showing a configuration example of the image processing system according to Embodiment 3 of the present invention.
Fig. 18 is a flowchart showing a processing example of the image processing apparatus used in the image processing system according to Embodiment 3 of the present invention.
Fig. 19 is an explanatory diagram conceptually showing a setting example of the search range of the image processing system according to Embodiment 3 of the present invention.
Fig. 20 is an explanatory diagram showing an example of the coefficients used in the horizontal-direction edge filter processing of the image processing system according to Embodiment 3 of the present invention.
Fig. 21 is an explanatory diagram showing an example of the coefficients used in the vertical-direction edge filter processing of the image processing system according to Embodiment 3 of the present invention.
Fig. 22 is an explanatory diagram showing a detection result of the image processing system according to Embodiment 3 of the present invention.
Fig. 23 is a block diagram showing a configuration example of the image processing system according to Embodiment 4 of the present invention.
Fig. 24 is a flowchart showing a processing example of the image processing apparatus 2 used in the image processing system according to Embodiment 4 of the present invention.
Description of reference numerals
1: imaging device
2: image processing apparatus
31, 32, 33, 34: computer programs
41, 42, 43, 44: recording media
Detailed description of the embodiments
The present invention is described in detail below with reference to the drawings showing its embodiments.
Embodiment 1.
Fig. 1 is a block diagram showing a configuration example of the image processing system according to Embodiment 1 of the present invention. In Fig. 1, reference numeral 1 denotes an imaging device such as an on-vehicle camera installed in a vehicle; the imaging device 1 is connected to an image processing apparatus 2 that performs image processing, via a communication line such as a dedicated cable or a communication network such as a wired or wireless in-vehicle LAN (Local Area Network). The imaging device 1 is arranged in front of the driver, on the steering wheel, the dashboard or the like in the vehicle, and its imaging state can be adjusted so that the horizontal and vertical directions of the driver's face become the horizontal and vertical directions of the image, respectively.
The imaging device 1 comprises: an MPU (Micro Processor Unit) 11 that controls the whole device; a ROM (Read Only Memory) 12 that records the various computer programs and data executed under the control of the MPU 11; a RAM (Random Access Memory) 13 that temporarily records various data generated when the computer programs recorded in the ROM 12 are executed; an image pickup unit 14 constituted using an imaging element such as a CCD (Charge Coupled Device); an A/D converter 15 that converts the analog image data obtained by the imaging of the image pickup unit 14 into digital data; a frame memory 16 that temporarily records the image data digitized by the A/D converter 15; and a communication interface 17 used for communication with the image processing apparatus 2.
In the imaging device 1, the image pickup unit 14 performs imaging processing continuously or intermittently, generating, for example, 30 items of image data (image frames) per second on the basis of the imaging processing and outputting them to the A/D converter 15; the A/D converter 15 converts each pixel constituting an image into digital image data expressed in gray scale, for example 256 gray levels (1 byte), and records it in the frame memory 16. The image data recorded in the frame memory 16 is output from the communication interface 17 to the image processing apparatus 2 at predetermined timing. The pixels constituting the image are arranged two-dimensionally, and the image data includes data indicating the position of each pixel in a plane rectangular coordinate system, a so-called xy coordinate system, and data expressing the brightness of each pixel as a gray-scale value. Instead of indicating the coordinates of each pixel individually in the xy coordinate system, the coordinates may also be indicated by the order in which the pixels are arranged in the data. The horizontal direction of the image corresponds to the x-axis direction of the image data, and the vertical direction of the image corresponds to the y-axis direction of the image data.
The image processing apparatus 2 comprises: a CPU (Central Processing Unit) 21 that controls the whole apparatus; an auxiliary recording unit 22, such as a CD-ROM drive, that reads information from a recording medium 41, such as a CD-ROM, on which various information such as the computer program 31 and data of Embodiment 1 of the present invention is recorded; a hard disk (hereinafter abbreviated as HD) 23 that records the various information read by the auxiliary recording unit 22; a RAM 24 that temporarily records various data generated when the computer program 31 recorded on the HD 23 is executed; a frame memory 25 constituted by a volatile memory; and a communication interface 26 used for communication with the imaging device 1.
Various information such as the computer program 31 of the present invention and data is read from the HD 23 and recorded in the RAM 24, and the various procedures included in the computer program 31 are executed by the CPU 21, whereby an on-vehicle computer operates as the image processing apparatus 2 of the present invention. The data recorded on the HD 23 includes data related to the computer program 31, for example the formulas, filters and various constants described later, and data indicating the detected detection object or candidates for the detection object.
The image processing apparatus 2 receives the image data output from the imaging device 1 at the communication interface 26, records the received image data in the frame memory 25, reads the image data recorded in the frame memory 25, and performs various kinds of image processing. The various kinds of image processing performed on the received image data are the processing required for detecting the contour of the driver's face, the eyes, the nose and so on from the image data. As a concrete example, a contour-width detection process can be cited: the brightness values arranged in the vertical direction of the image are accumulated, the accumulated values are compared with a predetermined threshold, and the horizontal range of the face contour, consisting of pixels brighter than the background, is thereby detected. As another such example, a contour-width detection process can be cited in which the variation of the accumulated value in the horizontal direction is differentiated, positions of large change are identified, and the boundary between the background and the face contour, where the brightness changes sharply, is thereby detected. The detailed processing is described in documents filed by the present applicant, such as Japanese Laid-Open Patent Publication No. 2004-234494 and Japanese Laid-Open Patent Publication No. 2004-234367. These kinds of image processing are not limited to those described in the above publications, and can be selected as appropriate according to the hardware configuration, the relation with other application programs, and other conditions.
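As an illustration only (not part of the patent text), the first contour-width detection process described above — accumulating brightness along the vertical direction and thresholding the per-column totals — can be sketched in Python with NumPy as follows; the function name `contour_width` and the threshold value are assumptions:

```python
import numpy as np

def contour_width(image, threshold):
    """Sketch of the contour-width detection described above (names assumed).

    Brightness values arranged in the vertical direction are accumulated per
    column; columns whose accumulated brightness exceeds the threshold are
    taken to belong to the face, which is brighter than the background.
    Returns the horizontal (x) range (left, right) of the face contour.
    """
    column_sums = image.astype(int).sum(axis=0)  # accumulate along the vertical direction
    face_columns = np.where(column_sums > threshold)[0]
    if face_columns.size == 0:
        return None                              # no column exceeded the threshold
    return int(face_columns[0]), int(face_columns[-1])

# Toy 4x8 image: a bright 'face' occupies columns 2..5 against a dark background.
img = np.zeros((4, 8), dtype=np.uint8)
img[:, 2:6] = 200
print(contour_width(img, threshold=300))         # -> (2, 5)
```

The threshold would in practice depend on image size and lighting; the patent leaves its choice to the cited publications.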
Next, the processing of the various devices used in the image processing system according to Embodiment 1 of the present invention will be described. In Embodiment 1, as explained using Fig. 1, the region between the left and right nose wings including the nostrils in the driver's face is taken as the detection object from an image obtained, for example, by photographing the driver's face with the imaging device 1 such as an on-vehicle camera installed in the vehicle. Fig. 2 and Fig. 3 are flowcharts showing a processing example of the image processing apparatus 2 used in the image processing system according to Embodiment 1 of the present invention. In the image processing apparatus 2, under control of the CPU 21 executing the computer program 31 recorded in the RAM 24, the image data obtained by the imaging of the imaging device 1 and received via the communication interface 26 is extracted from the frame memory 25 (S101); the width of the driver's face, that is, the horizontal (1st-direction) range of the contour marking the boundary of the face region, is detected from the extracted image data, for example by the contour-width detection process described above (S102); and the range over which the subsequent image processing is to be performed is set on the basis of the detection result (S103). The region range detected in step S102 (the contour width) and the image-processing range are recorded on the HD 23 or in the RAM 24.
The image processing apparatus 2 then, under control of the CPU 21, accumulates the brightness of the pixels arranged in the horizontal direction (1st direction) within the range set in step S103 for the image extracted in step S101 (S104), derives the variation of the accumulated value in the vertical direction (2nd direction) from the accumulation result (S105), and detects, from the derived variation of the accumulated value in the vertical direction, a plurality of positions exhibiting minima as vertical-direction positions corresponding to candidates for the detection object (S106). In step S106, not only the region between the left and right nose wings including the nostrils, which is the intended detection object, but also a plurality of other low-brightness candidates such as the eyebrows, eyes and mouth may be detected.
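The flow of steps S104-S106 (horizontal accumulation per row, variation along the vertical direction, minima detection) can be sketched as follows; this is a simplified Python/NumPy illustration, and the function name, minima criterion and toy image are assumptions:

```python
import numpy as np

def vertical_profile_minima(image):
    """Sketch of steps S104-S106 (name assumed).

    The brightness of pixels arranged in the horizontal direction is
    accumulated for each row (S104); the variation of the accumulated value
    along the vertical direction (S105) is then scanned for local minima,
    whose y positions are candidate rows for the detection object (S106).
    """
    profile = image.astype(int).sum(axis=1)   # one accumulated value per row
    minima = [y for y in range(1, len(profile) - 1)
              if profile[y] < profile[y - 1] and profile[y] <= profile[y + 1]]
    return profile, minima

# Dark rows (eyebrows, eyes, nostrils, ...) give low accumulated values.
img = np.full((7, 5), 200, dtype=np.uint8)
img[2, :] = 50                                # a dark row -> local minimum at y=2
img[5, :] = 30                                # another dark row -> minimum at y=5
_, candidates = vertical_profile_minima(img)
print(candidates)                             # -> [2, 5]
```

In the real system the profile would look like Fig. 4(b), with dips at the eyebrows, eyes, nostrils and mouth.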
The image processing apparatus 2 then, under control of the CPU 21, applies a second differential to the variation of the accumulated value derived in step S105 (S107), and from among the plurality of minimum positions detected in step S106 detects, in ascending order of the second differential value, a prescribed number of positions, for example 10, as positions including candidates for the detection object in the vertical direction (S108). In the processing of step S107, which applies the second differential to the variation of the accumulated value, for example the following Formula 1 is used. The candidates detected in step S106 are narrowed by the processing of step S107, so that at most 10 candidates are detected through the processing of steps S104-S108. When the number of candidates detected in step S106 is less than 10, the number of candidates detected in step S108 naturally does not reach 10. The prescribed number is a value pre-recorded on the HD 23 or in the RAM 24 and can be changed as required. The data indicating the candidates detected in step S108 is recorded on the HD 23 or in the RAM 24.
Second differential value = 2 × P(y) − P(y−8) − P(y+8) … Formula 1
where
y: coordinate in the vertical direction (y coordinate)
P(y): accumulated value at position y
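A minimal sketch of Formula 1 and the candidate narrowing of step S108, assuming the accumulated values are held in a Python list indexed by the y coordinate; the function names, the handling of positions too close to the borders, and the sample data are illustrative assumptions:

```python
def second_differential(P, y, step=8):
    """Formula 1 above: 2*P(y) - P(y-8) - P(y+8).

    At a pronounced minimum of the accumulated brightness the value is
    strongly negative, so candidates are ranked from low to high.
    """
    return 2 * P[y] - P[y - step] - P[y + step]

def narrow_candidates(P, minima, limit=10, step=8):
    """Keep at most `limit` minima, in ascending order of Formula 1 (S108)."""
    usable = [y for y in minima if step <= y < len(P) - step]
    return sorted(usable, key=lambda y: second_differential(P, y, step))[:limit]

# Accumulated values with two dips: deep at y=10, shallow at y=20.
P = [100] * 30
P[10] = 20
P[20] = 60
print(narrow_candidates(P, [10, 20]))   # -> [10, 20] (the deeper dip ranks first)
```

With more than 10 minima, only the 10 most pronounced dips would survive, matching the prescribed number in the text.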
Then, under control of the CPU 21, the image processing apparatus 2 reads from the HD 23 or the RAM 24 each of the vertical-direction positions of the candidates for the detection object detected in step S108, and, for the pixel row arranged in the horizontal direction corresponding to each read position, detects the ends of the candidate in the horizontal direction, that is, its left and right ends, on the basis of the change in pixel brightness (S109), and detects the horizontal range of the candidate for the detection object on the basis of the detected left and right ends (S110). A candidate whose left and right ends are not both detected in step S109, that is, a candidate for which only one end or neither end is detected, is removed from the candidates for the detection object, and the contents recorded on the HD 23 or in the RAM 24 are updated. In the processing of step S109, as described later, the point at which the run of consecutive pixels darker than their surroundings is interrupted is detected as an end. The processing of steps S109-S110 is performed for all candidates for the detection object, further narrowing the candidates and detecting their ranges.
Then, under control of the CPU 21, the image processing apparatus 2 compares the length of the horizontal range of each candidate for the detection object, detected and recorded in step S110, with the length of the horizontal range of the driver's face region (the contour width) detected and recorded in step S102, and determines as the detection object a candidate whose horizontal-range length falls within 22-43% of the length of the horizontal range of the driver's face region (S111), updating the contents recorded as candidates for the detection object on the HD 23 or in the RAM 24. In step S111, the following judgement is made for every candidate: whether the horizontal-range length of the candidate for the detection object, the region between the left and right nose wings including the nostrils, lies within a prescribed range, here 22-43%, of the horizontal-direction length of the face region containing the detection object; a candidate that falls within the prescribed range is detected as the detection object. If a plurality of candidates fall within the prescribed range in step S111, the detection object is determined using an index employed in the range-detection processing of steps S109-S110 (the transverse edge mark described later). The values of 22% and 43% shown as the prescribed range are not fixed values, and can be set appropriately according to factors such as the driver's ethnic group.
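The width test of step S111 amounts to a simple ratio check against the face-contour width; the sketch below is illustrative only, with the function name and the example widths assumed (the 22-43% bounds come from the text and, as noted, are tunable):

```python
def within_ratio(candidate_width, face_width, low=0.22, high=0.43):
    """Sketch of the S111 width test: the candidate's horizontal range must
    be 22-43% of the face-contour width. The bounds are not fixed values
    and may be adjusted, e.g. per driver population."""
    ratio = candidate_width / face_width
    return low <= ratio <= high

print(within_ratio(60, 200))   # 30% of the face width -> True
print(within_ratio(120, 200))  # 60% -> False (e.g. the mouth or an eyebrow line)
```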
Then, under control of the CPU 21, the image processing apparatus 2 sets a detection region that includes the detection object determined in step S111, whose horizontal width coincides with the detection object and whose vertical width is a prescribed number of pixels, for example 3 or 5 (S112); accumulates the brightness of the pixels arranged in the vertical direction within the set detection region and derives the variation of the accumulated value in the horizontal direction (S113); detects minima from the derived variation of the accumulated value in the horizontal direction (S114); counts the number of detected minima (S115); and judges whether the counted number of minima is less than a prescribed number, here 2 (S116). If the counted number of minima is less than the prescribed number (S116: YES), the image processing apparatus 2 judges, under control of the CPU 21, that the detection object determined in step S111 is false and that the detection object cannot be detected (S117), updates the contents recorded on the HD 23 or in the RAM 24 on the basis of the judgement result, and ends the processing. If the determined detection object is indeed the region between the left and right nose wings including the nostrils, the positions corresponding to the nostrils are minima in the variation of the accumulated value derived in step S113, so that two or more minima should be counted. Therefore, when the number of minima is 0 or 1, the determined detection object can be judged to be false. On the other hand, since shadows near the contour of the nose wings and the like may also produce minima, a count of 3 or more minima alone is not sufficient grounds for judging the object to be false.
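The minima count of steps S113-S116 can be sketched as follows (Python/NumPy; the function name, minima criterion and toy data are assumed); two dark columns standing in for the two nostrils yield the required two minima:

```python
import numpy as np

def count_horizontal_minima(region):
    """Sketch of S113-S115: brightness arranged in the vertical direction is
    accumulated per column inside the detection region, and local minima of
    the resulting horizontal variation are counted. Two nostrils should give
    at least two minima; fewer indicates a false candidate (S116/S117)."""
    profile = region.astype(int).sum(axis=0)
    return sum(1 for x in range(1, len(profile) - 1)
               if profile[x] < profile[x - 1] and profile[x] <= profile[x + 1])

# 3-pixel-high detection region with two dark columns (the nostrils) at x=2 and x=6.
region = np.full((3, 9), 180, dtype=np.uint8)
region[:, 2] = 40
region[:, 6] = 40
n = count_horizontal_minima(region)
print(n, "-> true candidate" if n >= 2 else "-> false candidate")
```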
When the counted number of minima is equal to or greater than the prescribed number of 2 in step S116 (S116: NO), the image processing apparatus 2, under control of the CPU 21, counts the number of vertically consecutive pixels whose brightness is equal to that of the pixel corresponding to a minimum, including that pixel itself (S118), and judges whether the counted number of consecutive pixels is equal to or greater than a preset prescribed number (S119). Step S118 judges the vertical continuity of low-brightness pixels. When counting the pixels, since the purpose is to judge the continuity of low-brightness pixels, it is not necessary to count only pixels whose brightness is exactly equal to that of the pixel corresponding to the minimum. Specifically, when the brightness is classified into 256 levels and expressed as a gray scale, if the gray level of the pixel exhibiting the minimum is 20, it is desirable to count the continuity of pixels whose gray level lies within an amplitude such as 20 ± 5.
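The continuity count of step S118, with the ±5 gray-level amplitude mentioned above, might be sketched as follows; the function name, the up/down scan order and the toy image are assumptions:

```python
import numpy as np

def vertical_run_length(image, x, y, tolerance=5):
    """Sketch of S118: starting from the pixel (x, y) that corresponds to a
    minimum, count how many vertically consecutive pixels have roughly the
    same low brightness (within +/- tolerance gray levels, e.g. 20 +/- 5).
    A long run suggests an eyeglass frame rather than a nostril region."""
    base = int(image[y, x])
    run = 1
    for yy in range(y + 1, image.shape[0]):      # scan downward
        if abs(int(image[yy, x]) - base) > tolerance:
            break
        run += 1
    for yy in range(y - 1, -1, -1):              # scan upward
        if abs(int(image[yy, x]) - base) > tolerance:
            break
        run += 1
    return run

img = np.full((8, 3), 200, dtype=np.uint8)
img[2:6, 1] = 20                                 # a 4-pixel dark vertical run
print(vertical_run_length(img, x=1, y=3))        # -> 4
```

Comparing the returned run length with the prescribed number then gives the S119 branch.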
When the counted number of consecutive pixels exceeds the preset prescribed number in step S119 (S119: YES), the image processing apparatus 2, under control of the CPU 21, judges that the detection object determined in step S111 is false and that the detection object cannot be detected (S117), updates the contents recorded on the HD 23 or in the RAM 24 on the basis of the judgement result, and ends the processing. When the continuity of low-brightness pixels is equal to or greater than the prescribed number, it is judged that an eyeglass frame has been falsely detected.
When the counted number of consecutive pixels is equal to or less than the preset prescribed number in step S119 (S119: NO), the image processing apparatus 2, under control of the CPU 21, judges that the detection object determined in step S111 is true and that the detection object has been detected (S120), updates the contents recorded on the HD 23 or in the RAM 24 on the basis of the judgement result, and ends the processing.
When all candidates for the detection object have been removed by the candidate-narrowing processing of steps S109-S111 and the like, it is likewise judged that the detection object cannot be detected.
The processing explained with the flowcharts of Fig. 2 and Fig. 3 will now be described in more detail. First, the processing of steps S101-S108 is described. Fig. 4 is an explanatory diagram conceptually showing, for the image processing system according to Embodiment 1 of the present invention, an example of the processing from determining the image-processing range to detecting candidates for the detection object. Fig. 4(a) shows the situation in which the image-processing range is determined; in Fig. 4(a), the outer frame shown by the solid line is the whole image represented by the image data extracted in step S101, showing the photographed image of the driver's face and, as the detection object, the region between the left and right nose wings including the nostrils of the driver's face. The lines running in the vertical direction (y-axis direction) of the image, shown by dot-and-dash lines, indicate the region range detected in step S102, that is, the width of the driver's face contour. The region enclosed by the contour width shown by the dot-and-dash lines and the upper and lower edges of the whole image shown by the solid line is the image-processing range set in step S103.
Fig. 4(b) is a graph showing the distribution of the accumulated brightness values in the vertical direction, obtained by accumulating the brightness of the pixels arranged in the horizontal direction in step S104 and derived in step S105. Fig. 4(b) shows the distribution of the accumulated brightness values in the vertical direction for the image shown in Fig. 4(a); the vertical axis corresponds to the vertical-direction coordinate of Fig. 4(a), and the horizontal axis represents the accumulated brightness value. The accumulated brightness values in the vertical direction shown in Fig. 4(b) vary so as to exhibit minima, indicated by arrows, at positions such as the eyebrows, eyes, nostrils and mouth, and it can be seen that candidates for the detection object can be detected from the minima in step S106.
Next, the processing of steps S109-S111 is described. Fig. 5 is an explanatory diagram schematically showing an example of the range on which the image processing system according to Embodiment 1 of the present invention performs end-detection processing. Fig. 6 is an explanatory diagram showing an example of the coefficients used in the end-detection processing of the image processing system according to Embodiment 1 of the present invention. Fig. 5 shows the pixels around a candidate for the detection object; the numbers at the top of Fig. 5 indicate the horizontal position of the pixels, that is, the x coordinate, and the marks at the left indicate the vertical position of the pixels, that is, the y coordinate. The pixel row arranged in the lateral direction of Fig. 5 whose y coordinate is y, shown by oblique lines running from upper right to lower left, represents a candidate for the detection object; the pixel rows y+1 and y−1 adjacent above and below the pixel row y, whose y coordinates are y+1 and y−1 and which are shown by oblique lines running from upper left to lower right, are also used for detecting the ends. By multiplying each pixel included in the pixel rows y, y+1 and y−1 by the coefficients shown in Fig. 6, a transverse edge filter is realized which makes clear, among the pixels arranged in the horizontal direction, the ends of regions in which low-brightness pixels continue. The coefficients shown in Fig. 6 form a 3 × 3 matrix indicating the coefficients to be multiplied with the brightness of nine pixels, a center pixel and its eight neighbours; each brightness is multiplied by the corresponding coefficient, and the absolute value of the sum of the results is calculated as the transverse edge coefficient of the center pixel. As shown in Fig. 6, the transverse edge coefficient is obtained by adding the values obtained by multiplying the brightness of the pixels adjacent on the left by "−1" to the values obtained by multiplying the brightness of the pixels adjacent on the right by "1". The 3 × 3 region enclosed by the thick line in Fig. 5 shows the state of the transverse edge filter applied to the pixel at coordinates (2, y+1); the transverse edge coefficient of the pixel at coordinates (2, y+1) is calculated by the following Formula 2.
|P(3, y+2) + P(3, y+1) + P(3, y) − P(1, y+2) − P(1, y+1) − P(1, y)| … Formula 2
where
x: coordinate in the horizontal direction (x coordinate)
y: coordinate in the vertical direction (y coordinate)
P(x, y): brightness of the pixel at coordinates (x, y)
As described above, the transverse edge coefficient is calculated for the pixels included in the pixel row y arranged in the horizontal direction as the candidate for the detection object and in the pixel rows y−1 and y+1 adjacent above and below it.
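A sketch of the transverse edge coefficient of Fig. 6 / Formula 2, expressed directly as the right-hand column sum minus the left-hand column sum with the absolute value taken (Python/NumPy; the function name and the sample image are assumptions):

```python
import numpy as np

def transverse_edge_coefficient(image, x, y):
    """Sketch of the transverse edge filter of Fig. 6 / Formula 2: within a
    3x3 window centred on (x, y), the right-hand neighbouring column is
    weighted +1, the left-hand column -1, the centre column 0, and the
    absolute value of the sum is taken."""
    right = image[y - 1:y + 2, x + 1].astype(int).sum()
    left = image[y - 1:y + 2, x - 1].astype(int).sum()
    return int(abs(right - left))

# A dark region ends at x=2: columns 0..2 are dark, columns 3.. are bright.
img = np.full((3, 5), 30, dtype=np.uint8)
img[:, 3:] = 200
print(transverse_edge_coefficient(img, x=2, y=1))  # -> |3*200 - 3*30| = 510
print(transverse_edge_coefficient(img, x=1, y=1))  # inside the dark area -> 0
```

At the boundary of a dark run the coefficient is large, which is exactly what the end-detection of step S109 exploits.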
Fig. 7 is an explanatory diagram schematically showing an example of the range on which the image processing system according to Embodiment 1 of the present invention performs end-detection processing. The numbers at the top of Fig. 7 indicate the horizontal position of the pixels, that is, the x coordinate, and the marks at the left indicate the vertical position of the pixels, that is, the y coordinate. The pixel row y arranged in the lateral direction of Fig. 7 represents a candidate for the detection object, and the transverse edge coefficient is calculated for the pixel row y and the pixel rows y+1 and y−1 adjacent above and below it. Then, for each pixel of the pixel row y except the pixels at both ends, shown by oblique lines running from upper right to lower left, the transverse edge coefficients of nine pixels, the pixel itself and its eight neighbours, are compared with a preset prescribed threshold, and an index indicating the number of pixels whose transverse edge coefficient exceeds the threshold is calculated as the transverse edge mark of that pixel. The 3 × 3 region enclosed by the thick line in Fig. 7 shows the pixels required to calculate the transverse edge mark of the pixel at coordinates (3, y): among the nine pixels in the region enclosed by the thick line in Fig. 7, the number of pixels whose transverse edge coefficient exceeds the threshold becomes the transverse edge mark of the pixel at coordinates (3, y).
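The transverse edge mark can then be sketched as a count over a 3 × 3 neighbourhood of precomputed coefficients; the coefficient values and threshold below are illustrative assumptions only:

```python
import numpy as np

def transverse_edge_mark(coeffs, x, y, threshold):
    """Sketch of the transverse edge mark: among the nine transverse edge
    coefficients of the pixel (x, y) and its eight neighbours, count how
    many exceed the threshold, giving a value from 0 to 9."""
    block = coeffs[y - 1:y + 2, x - 1:x + 2]
    return int((block > threshold).sum())

# Precomputed coefficient array (values are illustrative only).
coeffs = np.array([[0, 500, 510, 0],
                   [0, 480, 505, 0],
                   [0, 495, 520, 0]])
print(transverse_edge_mark(coeffs, x=1, y=1, threshold=400))  # -> 6
```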
Then, a pixel whose transverse edge mark, which takes a value from 0 to 9, is 5 or more is judged to lie within the horizontal range of the candidate for the detection object. That is, in step S109 the image processing apparatus 2 detects the leftmost pixel whose transverse edge mark is 5 or more as the left end of the candidate's horizontal range, and the rightmost pixel whose transverse edge mark is 5 or more as the right end of the candidate's horizontal range. Then, in step S110, the horizontal range of the candidate for the detection object is detected on the basis of the detected left and right ends.
Fig. 8 is an explanatory diagram schematically showing an example of the range on which the image processing system according to Embodiment 1 of the present invention performs end-detection processing. Fig. 8 shows the pixels of a candidate for the detection object and their transverse edge marks; the numbers at the top of Fig. 8 indicate the horizontal position of the pixels, that is, the x coordinate. In the example shown in Fig. 8, the range from the pixel at x coordinate 5 to the pixel at x coordinate 36 is detected as the horizontal range of the detection object.
Fig. 9 is an explanatory diagram showing candidates for the detection object in the image processing system according to Embodiment 1 of the present invention. Fig. 9 shows the image of the driver's face and the candidates for the detection object whose horizontal ranges were detected in the processing of steps S109-S110. In Fig. 9, the × marks indicate the left and right ends detected in step S109, and the line segments connecting the left and right ends indicated by the × marks represent the candidates for the detection object. Candidates whose left and right ends were not both detected have been removed from the candidates at this stage. In the example shown in Fig. 9, positions such as the eyebrows, eyes, nostrils, mouth and lower jaw are candidates for the detection object.
Then, by comparing the horizontal range of the driver's face region shown in Fig. 4 with the horizontal ranges of the candidates for the detection object shown in Fig. 9, the detection object is determined from the candidates in step S111. If a plurality of candidates for the detection object are determined in step S111, then for each determined candidate the number of pixels whose transverse edge mark is equal to or greater than a prescribed value is counted among the pixels arranged in the horizontal direction constituting that candidate, and the resulting value is used as an index called the nostril region mark. The candidate whose nostril region mark is the largest is then taken as the true detection object, and the other candidates are excluded as false.
Fig. 10 is an explanatory diagram conceptually showing the nostril region mark of the image processing system according to Embodiment 1 of the present invention. Fig. 10 shows the pixels included in the horizontal range of the detection object detected in Fig. 8 and the values of the transverse edge marks of those pixels; a pixel whose value is enclosed by a circle indicates that its transverse edge mark is equal to or greater than the prescribed value, here 5. The number of pixels whose transverse edge mark is equal to or greater than the prescribed value is then counted and used as the nostril region mark.
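Finally, the nostril region mark of Fig. 10 reduces to counting marks at or above the prescribed value of 5; the sketch below, with assumed example rows of transverse edge marks, shows how the candidate with the largest mark would be selected:

```python
import numpy as np

def nostril_region_mark(edge_marks, threshold=5):
    """Sketch of the nostril region mark of Fig. 10: within a candidate's
    horizontal range, count the pixels whose transverse edge mark is at
    least the prescribed value; the candidate with the highest count is
    kept as the true detection object."""
    return int((np.asarray(edge_marks) >= threshold).sum())

# Transverse edge marks along two candidate rows (values illustrative only).
nostril_row = [1, 5, 7, 8, 6, 5, 2]      # many strong marks
eyebrow_row = [1, 2, 5, 3, 2, 1, 0]      # few strong marks
scores = {"nostril": nostril_region_mark(nostril_row),
          "eyebrow": nostril_region_mark(eyebrow_row)}
print(max(scores, key=scores.get))        # -> nostril
```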
Further, in the image processing system of the present invention, the truth or falsity of the determined detected object can be judged again by the processing from step S112 onward.
The various conditions, including the numerical values shown in Embodiment 1, are merely examples and may be set as appropriate according to the system configuration, application and other circumstances.
Embodiment 2.
Fig. 11 is a block diagram showing a configuration example of the image processing system according to Embodiment 2 of the present invention. In Fig. 11, reference numeral 1 denotes an image capturing device, which is connected to an image processing apparatus 2 by, for example, a dedicated cable. The image capturing device 1 includes an MPU 11, a ROM 12, a RAM 13, an image pickup unit 14, an A/D converter 15, a frame memory 16 and a communication interface 17.
The image processing apparatus 2 includes a CPU 21, an auxiliary recording unit 22, an HD 23, a RAM 24, a frame memory 25 and a communication interface 26. The auxiliary recording unit 22 reads various information, such as a computer program 32 and data according to Embodiment 2 of the present invention, from a recording medium 42 and records it on the HD 23 and in the RAM 24; the CPU 21 executes the program, whereby the image processing apparatus 2 can perform the various steps of Embodiment 2 of the present invention.
A detailed description of each device is omitted here because it is the same as in Embodiment 1, to which reference should be made.
Next, the processing performed by the various devices used in the image processing system according to Embodiment 2 of the present invention will be described. In Embodiment 2, the nostrils in the driver's face are the detected object, detected from an image obtained by photographing the driver's face with the image capturing device 1, for example an on-vehicle camera mounted on a vehicle. Fig. 12 is a flowchart showing a processing example of the image processing apparatus 2 used in the image processing system according to Embodiment 2 of the present invention. Under the control of the CPU 21 executing the computer program 32 recorded in the RAM 24, the image processing apparatus 2 extracts from the frame memory 25 the image data obtained by photography with the image capturing device 1 and received via the communication interface 26 (S201); detects from the extracted image data, by the contour width detection processing exemplified in Embodiment 1, the width of the driver's face, that is, the horizontal range of the contour representing the boundary of the face region (S202); and further detects, as a nostril peripheral region, the region between the left and right wings of the nose that contains the nostrils (S203). The horizontal range of the contour detected in step S202 and the nostril peripheral region detected in step S203 are recorded on the HD 23 or in the RAM 24. For the detection of the nostril peripheral region in step S203, the method shown in Embodiment 1, for example, is used.
The image processing apparatus 2 then, under the control of the CPU 21, derives as minimum points the two points at which the pixel brightness takes minima in the horizontal direction within the nostril peripheral region detected in step S203 (S204), and sets, based on the two derived minimum points, search ranges for detecting the detected object (S205). The search ranges set in step S205 are recorded on the HD 23 or in the RAM 24. If three or more minimum points are found in step S204, the two with the smallest brightness are taken as the minimum points. Each range set in step S205 extends, around each of the two derived minimum points, 15 pixels to each side in the horizontal direction and 5 pixels up and down in the vertical direction. The two minimum points derived in step S204 can be regarded as points related to the left and right nostrils, so that in step S205 a search range can be regarded as having been set for each of the left and right nostrils.
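Steps S204 to S205 can be sketched as follows. This is a hedged illustration under stated assumptions: the brightness profile is made up, and "minimum point" is taken to mean a simple local minimum of the horizontal brightness profile; the window sizes follow the numbers quoted above.

```python
# Find the local minima of a 1-D horizontal brightness profile, keep the
# two darkest (assumed to correspond to the left and right nostrils),
# and set a search window of +/-15 pixels horizontally and +/-5 pixels
# vertically around each.

def local_minima(profile):
    """Indices i where profile[i] is strictly below both neighbours."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]]

def nostril_search_ranges(profile, y, half_w=15, half_h=5):
    # Keep the two minima with the smallest brightness values.
    minima = sorted(local_minima(profile), key=lambda i: profile[i])[:2]
    # Each range: (x_left, x_right, y_top, y_bottom).
    return [(i - half_w, i + half_w, y - half_h, y + half_h)
            for i in sorted(minima)]

profile = [200, 180, 90, 150, 170, 160, 80, 140, 190]  # two dark dips
print(nostril_search_ranges(profile, y=120))
```

A real implementation would clip the windows to the image bounds; that bookkeeping is omitted here for brevity.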
Then, under the control of the CPU 21, the image processing apparatus 2 converts the brightness of every pixel within the search ranges set in step S205 by black region calculation filter processing, which performs additions and subtractions (S206); detects as the detected object the pixel whose converted value is the minimum (S207); and records the detection result on the HD 23 or in the RAM 24. Here, the additions are based on the brightness of the other adjacent pixels, and the subtractions are based on the brightness of the pixels at positions separated from the pixel by a predetermined distance in the horizontal direction and of the pixels at positions separated by a predetermined distance in the vertical direction. The processing of steps S206 to S207 is performed separately on the pixels contained in each of the left and right nostril search ranges.
The processing described with the flowchart of Fig. 12 will now be explained in more detail. Fig. 13 is an explanatory diagram conceptually showing a setting example of the detection ranges in the image processing system according to Embodiment 2 of the present invention. In Fig. 13, the outer frame drawn with a solid line is the whole image represented by the image data extracted in step S201; the vertical (y-axis direction) lines drawn with dot-and-dash lines indicate the region detected in step S202, that is, the contour width of the driver's face; and the thick horizontal line segment is the region between the left and right wings of the nose containing the nostrils, detected as the nostril peripheral region in step S203.
Fig. 14 is an explanatory diagram conceptually showing a setting example of the search ranges in the image processing system according to Embodiment 2 of the present invention. Fig. 14 shows the image around the nostrils; the rectangular ranges drawn with solid lines in Fig. 14 are the search ranges set for the left and right nostrils respectively in step S205.
Fig. 15 is an explanatory diagram showing an example of the coefficients used in the black region calculation filter processing of the image processing system according to Embodiment 2 of the present invention. By multiplying the pixels included in a search range by the coefficients shown in Fig. 15, a pixel whose vicinity is dark and whose surroundings at a distance are bright is made to stand out. In the black region calculation filter shown in Fig. 15, the coefficient multiplied by the brightness of the pixel being converted and of the eight adjacent pixels is set to "1"; the coefficient multiplied by the brightness of the two pixels at positions separated by a predetermined distance to the left and right of the pixel is set to "-1"; and the coefficient multiplied by the brightness of the two pixels at positions separated by a predetermined distance above and below the pixel is set to "-1". The predetermined distance is set to 1/18 of the region detected in step S202.
Fig. 16 is an explanatory diagram conceptually showing a detection example of the black region calculation filter processing of the image processing system according to Embodiment 2 of the present invention. Fig. 16 shows the position of the black region calculation filter when the filter processing is performed on the left nostril, detected as the detected object within the search range set as shown in Fig. 14. When the black region calculation filter set under the conditions shown in Fig. 15 is applied to a pixel near the centre of the nostril as shown in Fig. 16, the coefficients used for the additions fall inside the nostril and the coefficients used for the subtractions fall outside it, so that the centre of the nostril stands out.
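The black region calculation filter of steps S206 to S207 can be sketched as follows, under stated assumptions: the test image, its size and the predetermined distance are illustrative, not from the patent.

```python
# Add the brightness of a pixel and its eight neighbours, subtract the
# brightness of the four pixels a fixed distance d away (left, right,
# above, below), and take the pixel with the minimum result as the
# nostril centre: a dark centre with bright distant surroundings gives
# a strongly negative value.

def black_region_value(img, x, y, d):
    """Sum of the 3x3 neighbourhood minus the four distant pixels."""
    add = sum(img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    sub = img[y][x - d] + img[y][x + d] + img[y - d][x] + img[y + d][x]
    return add - sub

def darkest_point(img, d):
    """Coordinates (x, y) of the minimum filter response."""
    h, w = len(img), len(img[0])
    best = None
    for y in range(d, h - d):
        for x in range(d, w - d):
            v = black_region_value(img, x, y, d)
            if best is None or v < best[0]:
                best = (v, x, y)
    return best[1], best[2]

# A bright 7x7 image with a dark 3x3 patch: the patch centre is found.
img = [[200] * 7 for _ in range(7)]
for yy in range(2, 5):
    for xx in range(2, 5):
        img[yy][xx] = 10
print(darkest_point(img, 2))  # -> (3, 3)
```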
The various conditions, including the numerical values shown in Embodiment 2, are merely examples and may be set as appropriate according to the system configuration, application and other circumstances.
Embodiment 3.
Fig. 17 is a block diagram showing a configuration example of the image processing system according to Embodiment 3 of the present invention. In Fig. 17, reference numeral 1 denotes an image capturing device, which is connected to an image processing apparatus 2 by, for example, a dedicated cable. The image capturing device 1 includes an MPU 11, a ROM 12, a RAM 13, an image pickup unit 14, an A/D converter 15, a frame memory 16 and a communication interface 17.
The image processing apparatus 2 includes a CPU 21, an auxiliary recording unit 22, an HD 23, a RAM 24, a frame memory 25 and a communication interface 26. The auxiliary recording unit 22 reads various information, such as a computer program 33 and data according to Embodiment 3 of the present invention, from a recording medium 43 and records it on the HD 23 and in the RAM 24; the CPU 21 executes the program, whereby the image processing apparatus 2 can perform the various steps of Embodiment 3 of the present invention.
A detailed description of each device is omitted here because it is the same as in Embodiment 1, to which reference should be made.
Next, the processing performed by the various devices used in the image processing system according to Embodiment 3 of the present invention will be described. In Embodiment 3, the region below and around the nostrils of the driver's face is the detected object, detected from an image obtained by photographing the driver's face with the image capturing device 1, for example an on-vehicle camera mounted on a vehicle. Fig. 18 is a flowchart showing a processing example of the image processing apparatus 2 used in the image processing system according to Embodiment 3 of the present invention. Under the control of the CPU 21 executing the computer program 33 recorded in the RAM 24, the image processing apparatus 2 extracts from the frame memory 25 the image data obtained by photography with the image capturing device 1 and received via the communication interface 26 (S301); detects from the extracted image data, by the contour width detection processing exemplified in Embodiment 1, the width of the driver's face, that is, the horizontal range of the contour representing the boundary of the face region (S302); further detects the positions of the eyes and the nose by eye and nose detection processing using pattern matching or the like (S303); and sets a search range according to the horizontal range of the detected contour and the positions of the eyes and the nose (S304). The horizontal range of the contour detected in step S302, the eye and nose positions detected in step S303 and the search range set in step S304 are recorded on the HD 23 or in the RAM 24. Details of the eye and nose detection processing in step S303 are described in documents filed by the present applicant, such as Japanese Laid-Open Patent Publication No. 2004-234367 and No. 2004-234494. The search range set in step S304 is set, for example, as a region based on the following positions: the upper end in the vertical direction is the position below the mean of the y coordinates representing the vertical positions of the eyes by a distance of 1/16 of the horizontal width of the contour; the lower end is the position below the mean of the y coordinates of the eyes by a distance of 3/8 of the contour width; the left end in the horizontal direction is the position to the left of the x coordinate representing the horizontal position of the nose by a distance of 1/8 of the contour width; and the right end is the position to the right of the nose x coordinate by a distance of 1/8 of the contour width.
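The search range of step S304 can be sketched directly from the fractions quoted above. The input values are illustrative; y is assumed to grow downward, as in image coordinates.

```python
# Compute the search range of step S304: top = mean eye y + width/16,
# bottom = mean eye y + 3*width/8, left/right = nose x -/+ width/8.

def search_range(eye_ys, nose_x, contour_width):
    mean_eye_y = sum(eye_ys) / len(eye_ys)
    top = mean_eye_y + contour_width / 16
    bottom = mean_eye_y + 3 * contour_width / 8
    left = nose_x - contour_width / 8
    right = nose_x + contour_width / 8
    return left, right, top, bottom

print(search_range(eye_ys=(100, 104), nose_x=160, contour_width=160))
# -> (140.0, 180.0, 112.0, 162.0)
```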
The image processing apparatus 2 then, under the control of the CPU 21, derives for every pixel in the search range set in step S304 a horizontal direction pixel mark, an index obtained by multiplying a numerical value based on the brightness difference between horizontally adjacent pixels by a numerical value representing the brightness height of the horizontally adjacent pixel (S305), and accumulates the derived horizontal direction pixel marks in the vertical direction, thereby deriving a horizontal direction mark, an index representing the change of the accumulated value in the horizontal direction (S306). Further, under the control of the CPU 21, the image processing apparatus 2 derives for every pixel in the search range a vertical direction pixel mark, an index obtained by multiplying a numerical value based on the brightness difference between vertically adjacent pixels by a numerical value representing the brightness height of the vertically adjacent pixel (S307), and accumulates the derived vertical direction pixel marks in the horizontal direction, thereby deriving a vertical direction mark, an index representing the change of the accumulated value in the vertical direction (S308). The image processing apparatus 2 then, under the control of the CPU 21, detects as the detected object the region defined by the horizontal range between the maximum and minimum of the horizontal direction mark derived in step S306 and the vertical range between the maximum and minimum of the vertical direction mark derived in step S308 (S309), and records the detection result on the HD 23 or in the RAM 24.
The processing described with the flowchart of Fig. 18 will now be described further. Fig. 19 is an explanatory diagram conceptually showing a setting example of the search range in the image processing system according to Embodiment 3 of the present invention. In Fig. 19, the outer frame drawn with a solid line is the whole image represented by the image data extracted in step S301; the vertical (y-axis direction) lines drawn with dot-and-dash lines indicate the region detected in step S302, that is, the contour width of the driver's face; the positions indicated by the * marks are the positions of the eyes and the nose detected in step S303; and the region drawn with a dotted line is the search range set in step S304.
The horizontal direction pixel mark derived in step S305 is expressed by the following Formula 3.
Sh(x, y) = H(x, y) · (255 − P(x+1, y))  (when H(x, y) ≥ 0)
Sh(x, y) = H(x, y) · (255 − P(x−1, y))  (when H(x, y) < 0)   ... Formula 3
where
x: coordinate in the horizontal direction (x coordinate)
y: coordinate in the vertical direction (y coordinate)
Sh(x, y): horizontal direction pixel mark of the pixel at coordinates (x, y)
H(x, y): horizontal direction edge filter result of the pixel at coordinates (x, y)
P(x, y): brightness of the pixel at coordinates (x, y)
Fig. 20 is an explanatory diagram showing an example of the coefficients used in the horizontal direction edge filter processing of the image processing system according to Embodiment 3 of the present invention. In Formula 3, H(x, y) represents the result of the horizontal direction edge filter processing performed with the coefficients shown in Fig. 20. Fig. 20 represents, as a 3 × 3 matrix, the coefficients to be multiplied by the brightness of nine pixels: the brightness of the centre pixel and of the eight adjacent pixels is multiplied by the respective corresponding coefficients, and the sum of the results is calculated as the horizontal direction edge filter result of the centre pixel. In the edge filter processing performed with the coefficients of Fig. 20, the brightness of the pixel adjacent on the left is multiplied by "-1" and added to the brightness of the pixel adjacent on the right multiplied by "1", and the resulting value is obtained as the index. In other words, the horizontal direction edge filter processing obtains a numerical value based on the brightness difference between horizontally adjacent pixels.
The factor multiplied by the horizontal direction edge filter result is the value obtained by subtracting the brightness of the horizontally adjacent pixel from "255", the highest grey-scale value when pixel brightness is divided into 256 levels; that is, a numerical value representing the brightness height. The horizontal direction pixel mark is thus an index obtained from a numerical value based on the brightness difference between horizontally adjacent pixels and a numerical value representing the brightness height of the horizontally adjacent pixel. Which adjacent pixel's brightness is used in the factor multiplied by the horizontal direction edge filter result differs according to whether that result is positive or negative.
The horizontal direction mark derived in step S306 is an index representing, as the relation between the x coordinate and the accumulated value, the result of accumulating the horizontal direction pixel marks in the vertical direction; that is, the horizontal direction mark represents the change of the horizontal direction pixel mark in the horizontal direction. Specifically, in the horizontal direction mark, the value becomes large at positions where the pixel brightness falls sharply in the horizontal direction and becomes small at positions where it rises sharply.
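Steps S305 to S306 can be sketched as follows. This is a hedged illustration: a simple central difference stands in for the 3 × 3 filter of Fig. 20 (which is an assumption, since only the filter's behaviour, not its exact coefficients, is quoted here), and the test image is made up.

```python
# Compute Sh(x, y) per Formula 3, weighting the edge response by the
# darkness (255 - brightness) of the neighbour it points toward, then
# accumulate the marks down each column to obtain the horizontal
# direction mark.

def horizontal_direction_mark(img):
    h, w = len(img), len(img[0])
    marks = [0] * w
    for y in range(h):
        for x in range(1, w - 1):
            edge = img[y][x + 1] - img[y][x - 1]        # stand-in for H(x, y)
            if edge >= 0:
                sh = edge * (255 - img[y][x + 1])       # Formula 3, H >= 0
            else:
                sh = edge * (255 - img[y][x - 1])       # Formula 3, H < 0
            marks[x] += sh                              # accumulate over y
    return marks

print(horizontal_direction_mark([[50, 100, 200]]))  # -> [0, 8250, 0]
```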
The vertical direction pixel mark derived in step S307 is expressed by the following Formula 4.
Sv(x, y) = V(x, y) · (255 − P(x, y+1))  (when V(x, y) ≥ 0)
Sv(x, y) = V(x, y) · (255 − P(x, y−1))  (when V(x, y) < 0)   ... Formula 4
where
Sv(x, y): vertical direction pixel mark of the pixel at coordinates (x, y)
V(x, y): vertical direction edge filter result of the pixel at coordinates (x, y)
P(x, y): brightness of the pixel at coordinates (x, y)
Fig. 21 is an explanatory diagram showing an example of the coefficients used in the vertical direction edge filter processing of the image processing system according to Embodiment 3 of the present invention. In Formula 4, V(x, y) represents the result of the vertical direction edge filter processing performed with the coefficients shown in Fig. 21. Fig. 21 represents, as a 3 × 3 matrix, the coefficients to be multiplied by the brightness of nine pixels: the brightness of the centre pixel and of the eight adjacent pixels is multiplied by the respective corresponding coefficients, and the sum of the results is calculated as the vertical direction edge filter result of the centre pixel. In the vertical direction edge filter processing performed with the coefficients of Fig. 21, the brightness of the pixel adjacent above is multiplied by "-1" and added to the brightness of the pixel adjacent below multiplied by "1", and the resulting value is obtained as the index. In other words, the vertical direction edge filter processing obtains a numerical value based on the brightness difference between vertically adjacent pixels.
The factor multiplied by the vertical direction edge filter result is the value obtained by subtracting the brightness of the vertically adjacent pixel from "255", the highest grey-scale value when pixel brightness is divided into 256 levels; that is, a numerical value representing the brightness height. The vertical direction pixel mark is thus an index obtained from a numerical value based on the brightness difference between vertically adjacent pixels and a numerical value representing the brightness height of the vertically adjacent pixel. Which adjacent pixel's brightness is used in the factor multiplied by the vertical direction edge filter result differs according to whether that result is positive or negative.
The vertical direction mark derived in step S308 is an index representing, as the relation between the y coordinate and the accumulated value, the result of accumulating the vertical direction pixel marks in the horizontal direction; that is, the vertical direction mark represents the change of the vertical direction pixel mark in the vertical direction. Specifically, in the vertical direction mark, the value becomes large at positions where the pixel brightness falls sharply in the vertical direction and becomes small at positions where it rises sharply.
In step S309, the following region is detected: the position of the maximum of the horizontal direction mark is taken as the left end in the horizontal direction and the position of its minimum as the right end, and the position of the maximum of the vertical direction mark is taken as the upper end in the vertical direction and the position of its minimum as the lower end.
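Step S309 reduces to locating the extrema of the two mark arrays. The mark arrays below are made up; real ones would come from steps S306 and S308.

```python
# Take the positions of the maximum and minimum of the horizontal and
# vertical direction marks as the left/right and upper/lower ends of
# the detected region.

def detect_region(h_marks, v_marks):
    left = h_marks.index(max(h_marks))
    right = h_marks.index(min(h_marks))
    top = v_marks.index(max(v_marks))
    bottom = v_marks.index(min(v_marks))
    return left, right, top, bottom

h_marks = [0, 5, 90, 10, -3, -80, 0]   # sharp brightness fall, then rise
v_marks = [2, 70, 4, -1, -60]
print(detect_region(h_marks, v_marks))  # -> (2, 5, 1, 4)
```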
Fig. 22 is an explanatory diagram showing a detection result of the image processing system according to Embodiment 3 of the present invention. Fig. 22 shows the search range set in step S304; the hatched rectangular region bounded by the left end L, right end R, upper end U and lower end D is the detected region below and around the nostrils.
The various conditions, including the numerical values shown in Embodiment 3, are merely examples and may be set as appropriate according to the system configuration, application and other circumstances. For example, Embodiment 3 shows a mode in which the vertical direction mark is derived after the horizontal direction mark; however, various extensions are possible, such as a mode in which the horizontal direction mark is derived after the vertical direction mark, or a mode in which the horizontal direction mark is not derived and, with the width of the face contour used as the left and right ends in the horizontal direction, only the vertical direction mark is derived.
Embodiment 4.
Fig. 23 is a block diagram showing a configuration example of the image processing system according to Embodiment 4 of the present invention. In Fig. 23, reference numeral 1 denotes an image capturing device, which is connected to an image processing apparatus 2 by, for example, a dedicated cable. The image capturing device 1 includes an MPU 11, a ROM 12, a RAM 13, an image pickup unit 14, an A/D converter 15, a frame memory 16 and a communication interface 17.
The image processing apparatus 2 includes a CPU 21, an auxiliary recording unit 22, an HD 23, a RAM 24, a frame memory 25 and a communication interface 26. The auxiliary recording unit 22 reads various information, such as a computer program 34 and data according to Embodiment 4 of the present invention, from a recording medium 44 and records it on the HD 23 and in the RAM 24; the CPU 21 executes the program, whereby the image processing apparatus 2 performs the various steps of Embodiment 4 of the present invention. In the image processing apparatus 2 of Embodiment 4, a plurality of detection methods are recorded, including the detection methods described in Embodiments 1 to 3 of the present invention.
A detailed description of each device is omitted here because it is the same as in Embodiment 1, to which reference should be made.
Next, the processing performed by the various devices used in the image processing system according to Embodiment 4 of the present invention will be described. In Embodiment 4, the region of the driver's face containing the nostrils is the detected object, detected from an image obtained by photographing the driver's face with the image capturing device 1, for example an on-vehicle camera mounted on a vehicle. Fig. 24 is a flowchart showing a processing example of the image processing apparatus 2 used in the image processing system according to Embodiment 4 of the present invention. Under the control of the CPU 21 executing the computer program 34 recorded in the RAM 24, the image processing apparatus 2 extracts from the frame memory 25 the image data obtained by photography with the image capturing device 1 and received via the communication interface 26 (S401); calculates the mean value of the brightness of the pixels contained in the extracted image data (S402); and compares the calculated mean value with a threshold set in advance for the brightness mean (S403). Further, under the control of the CPU 21, the image processing apparatus 2 calculates the dispersion value of the pixel brightness (S404) and compares the calculated dispersion value with a threshold set in advance for the brightness dispersion (S405). Then, under the control of the CPU 21, the image processing apparatus 2 determines, based on the comparison result between the brightness mean and its threshold and the comparison result between the brightness dispersion and its threshold, the priority of the plurality of detection methods recorded on the HD 23 (S406).
In Embodiment 4, the priority of the detection methods is thus determined according to the mean value and the dispersion value of the brightness. The determined priority specifies whether each of the plurality of detection methods needs to be carried out and in what order, and according to what is determined, the detection methods described in Embodiments 1 to 3 of the present invention and other detection methods are carried out.
The mean value and the dispersion value of the brightness of the image data change considerably according to how light falls on the driver's face, so the illumination situation is judged from the mean value and the dispersion value, and the most suitable detection method is selected. Specifically, if uneven illumination occurs, for example when sunlight falls on only the left half of the face, the brightness mean is at or below its threshold and the brightness dispersion is at or above its threshold; it can therefore be judged that uneven illumination has occurred, and detection methods that are less affected by uneven illumination are given priority.
For example, in a state in which no uneven illumination has occurred, the nostril peripheral region is detected by the detection method shown in Embodiment 1, and the nostrils are then detected by the detection method shown in Embodiment 2 using that detection result. Detecting the nostril peripheral region limits the range subject to image processing, and therefore improves both the processing speed and the detection accuracy. If uneven illumination has occurred, however, the likelihood of brightness saturation is high, so the reliability of the processing using the transverse edge filter of Embodiment 1 decreases. In that case, the nostrils are preferentially detected by the detection method shown in Embodiment 2.
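The decision of steps S402 to S406 can be sketched as follows. The thresholds, the method names and the use of variance as the "dispersion value" are illustrative assumptions, not values from the patent.

```python
# Compute the brightness mean and dispersion (variance) of the image,
# compare each with a preset threshold, and prioritise the detection
# methods: uneven illumination (low mean, high dispersion) favours the
# method that is less affected by it.

def brightness_stats(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return mean, var

def method_priority(pixels, mean_thresh=100, var_thresh=2000):
    mean, var = brightness_stats(pixels)
    if mean <= mean_thresh and var >= var_thresh:
        # Uneven illumination judged: skip Embodiment 1's transverse
        # edge filtering and go straight to Embodiment 2's method.
        return ["embodiment2"]
    return ["embodiment1", "embodiment2"]

dark_uneven = [10] * 60 + [200] * 40   # mostly dark, partly bright
print(method_priority(dark_uneven))    # -> ['embodiment2']
```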
In Embodiment 4 described above, one threshold each is set for the mean value and the dispersion value of the brightness, but the present invention is not limited to this: a plurality of thresholds may be set and the priority of the detection methods determined for each situation, and processing conditions such as the various setting values required for the detection image processing may also be decided based on the mean value and the dispersion value.
In Embodiments 1 to 4 described above, image data represented by a plane rectangular coordinate system is processed, but the present invention is not limited to this and can be applied to image data in various coordinate systems, for example image data represented by a coordinate system in which a 1st direction and a 2nd direction intersect at an angle of 60 degrees, used when processing an image whose pixels are arranged in a honeycomb pattern.
In Embodiments 1 to 4 described above, the driver of a vehicle is the detected object, but the present invention is not limited to this; various persons other than the driver, and living things or objects other than persons, may also be the detected object.
In Embodiments 1 to 4 described above, the detected object is detected from an image generated by photography with an image capturing device using an on-vehicle camera, but the present invention is not limited to this and can be applied to various kinds of image processing in which images generated by various devices and methods are recorded on an HD and a specific detected object is detected from the recorded images.

Claims (9)

1. An image processing method for detecting a specific detected object from a two-dimensional image in which pixels are arranged in a 1st direction and a 2nd direction that differ from each other, the image processing method comprising the steps of:
converting the brightness of one pixel according to the results of additions and subtractions; and
detecting the detected object based on the result of the conversion,
wherein the additions are based on the brightness of the other pixels adjacent to said one pixel, and the subtractions are based on the brightness of the pixels at positions separated from said one pixel by a predetermined distance in the 1st direction and the brightness of the pixels at positions separated from said one pixel by a predetermined distance in the 2nd direction, and
the conversion is processing provided with a transverse edge filter function.
2. An image processing method for detecting a specific detected object from a two-dimensional image in which pixels are arranged in a 1st direction and a 2nd direction that differ from each other, the image processing method comprising the steps of:
accumulating, in the 2nd direction, numerical values based on the brightness of the pixels arranged in the 1st direction, thereby deriving the change of the accumulated value in the 1st direction;
accumulating, in the 1st direction, numerical values based on the brightness of the pixels arranged in the 2nd direction, thereby deriving the change of the accumulated value in the 2nd direction; and
detecting the detected object according to a range in the 1st direction and a range in the 2nd direction, wherein the range in the 1st direction is obtained based on the derived change of the accumulated value in the 1st direction, and the range in the 2nd direction is obtained based on the derived change of the accumulated value in the 2nd direction,
wherein the change of the accumulated value in the 1st direction is derived by horizontal direction edge filter processing, and the change of the accumulated value in the 2nd direction is derived by vertical direction edge filter processing.
3. The image processing method according to claim 1 or 2, further comprising the steps of:
preparing a plurality of detection methods for detection from the image;
calculating a mean value of the brightness of the pixels in the image;
calculating a variance value of the brightness of the pixels in the image;
determining, when the calculated mean value is not more than a 1st predetermined value and the calculated variance value is not less than a 2nd predetermined value, that a biased change has occurred, and determining a priority order of the prepared detection methods according to the result of this determination; and
detecting the specific detection object from the image using a detection method selected according to the determined priority order.
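The selection step in claim 3 can be sketched as follows: a low mean combined with a high variance (e.g. a mostly dark face image with a few bright highlights) is judged a biased change, and the prepared detection methods are reordered before one is applied. The thresholds, the method names, and the two orderings are purely illustrative assumptions; the claim fixes none of them.

```python
import numpy as np

def choose_detection_order(img, mean_thresh=60, var_thresh=2500):
    """When mean brightness <= a 1st predetermined value and variance
    >= a 2nd predetermined value, judge that a biased change occurred
    and reorder the prepared detection methods. All concrete values
    and names here are assumptions for illustration."""
    mean = float(img.mean())
    var = float(img.var())
    biased = mean <= mean_thresh and var >= var_thresh
    default_order = ["edge_filter", "projection", "template_match"]
    biased_order = ["projection", "edge_filter", "template_match"]
    return biased_order if biased else default_order
```

The caller would then try the methods in the returned order until one detects the object, which is how a single pipeline can stay robust across lighting conditions.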
4. An image processing apparatus for detecting a specific detection object from a two-dimensional image in which pixels are arranged in a 1st direction and a 2nd direction that differ from each other,
the image processing apparatus comprising:
a converter which converts the brightness of one pixel according to the result of an addition and a subtraction, wherein the addition is performed based on the brightness of each of the other pixels adjacent to the one pixel, and the subtraction is performed based on the brightness of a pixel located a predetermined distance from the one pixel in the 1st direction and the brightness of a pixel located a predetermined distance from the one pixel in the 2nd direction; and
a detector which detects the detection object according to the result of the conversion,
wherein the conversion is a process having a horizontal edge filter function.
5. The image processing apparatus according to claim 4, wherein
the detector detects the pixel having the minimum converted value as the detection object.
6. An image processing apparatus for detecting a specific detection object from a two-dimensional image in which pixels are arranged in a 1st direction and a 2nd direction that differ from each other,
the image processing apparatus comprising:
a 1st deriving unit which accumulates, in the 2nd direction, numerical values based on brightness changes of the pixels arranged in the 1st direction, thereby deriving a variation of the accumulated values along the 1st direction;
a 2nd deriving unit which accumulates, in the 1st direction, numerical values based on brightness changes of the pixels arranged in the 2nd direction, thereby deriving a variation of the accumulated values along the 2nd direction; and
a detector which detects the detection object according to a range in the 1st direction obtained from the variation of the accumulated values derived by the 1st deriving unit and a range in the 2nd direction obtained from the variation of the accumulated values derived by the 2nd deriving unit,
wherein the variation of the accumulated values along the 1st direction is derived by a horizontal edge filter process, and the variation of the accumulated values along the 2nd direction is derived by a vertical edge filter process.
7. The image processing apparatus according to claim 6, wherein
the 1st deriving unit accumulates, in the 2nd direction, an index obtained from a numerical value based on the brightness difference between adjacent pixels in the 1st direction and a numerical value indicating which of the adjacent pixels in the 1st direction is brighter, thereby deriving the variation of the accumulated values along the 1st direction,
the 2nd deriving unit accumulates, in the 1st direction, an index obtained from a numerical value based on the brightness difference between adjacent pixels in the 2nd direction and a numerical value indicating which of the adjacent pixels in the 2nd direction is brighter, thereby deriving the variation of the accumulated values along the 2nd direction, and
the detector detects the detection object according to a range in the 1st direction running from the position where the accumulated value derived by the 1st deriving unit is maximum to the position where it is minimum, and a range in the 2nd direction running from the position where the accumulated value derived by the 2nd deriving unit is maximum to the position where it is minimum.
8. The image processing apparatus according to claim 4 or 6, further comprising:
a preparing unit which prepares a plurality of detection methods for detection from the image;
a 1st calculating unit which calculates a mean value of the brightness of the pixels in the image;
a 2nd calculating unit which calculates a variance value of the brightness of the pixels in the image; and
a priority determining unit which, when the calculated mean value is not more than a 1st predetermined value and the calculated variance value is not less than a 2nd predetermined value, determines that a biased change has occurred and determines a priority order of the prepared detection methods according to the result of this determination,
wherein the detector detects the specific detection object from the image using a detection method selected according to the determined priority order.
9. An image processing system comprising:
the image processing apparatus according to any one of claims 4 to 8; and
an imaging device which generates the image processed by the image processing apparatus,
wherein the detection object is a region including the nostrils of a person in the image captured by the imaging device,
the 1st direction is a horizontal direction, and
the 2nd direction is a vertical direction.
CN 200910224864 2005-02-23 2005-02-23 Image processing method, image processing device and image processing system Active CN101714210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910224864 CN101714210B (en) 2005-02-23 2005-02-23 Image processing method, image processing device and image processing system


Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN2005800478099A Division CN101116106B (en) 2005-02-23 2005-02-23 Image processing process, image processing apparatus, image processing system

Publications (2)

Publication Number Publication Date
CN101714210A CN101714210A (en) 2010-05-26
CN101714210B true CN101714210B (en) 2013-08-21

Family

ID=42417848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910224864 Active CN101714210B (en) 2005-02-23 2005-02-23 Image processing method, image processing device and image processing system

Country Status (1)

Country Link
CN (1) CN101714210B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6372388B2 (en) * 2014-06-23 2018-08-15 株式会社デンソー Driver inoperability detection device
JP6999318B2 (en) * 2017-07-24 2022-01-18 ラピスセミコンダクタ株式会社 Imaging device and horizontal detection method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5859921A (en) * 1995-05-10 1999-01-12 Mitsubishi Denki Kabushiki Kaisha Apparatus for processing an image of a face


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JP Laid-Open Patent Publication No. 2003-108981, published 2003.04.11
JP Laid-Open Patent Publication No. 2004-180285, published 2004.06.24
JP Laid-Open Patent Publication No. 2004-80156, published 2004.03.11

Also Published As

Publication number Publication date
CN101714210A (en) 2010-05-26

Similar Documents

Publication Publication Date Title
CN101116106B (en) Image processing process, image processing apparatus, image processing system
CN106128115B A fusion method for detecting traffic information based on twin cameras
US8175806B2 (en) Car navigation system
CN101120379B (en) Image processing process and image processing system
CN102646274B (en) Lane boundary detecting device and lane boundary detecting method
CN107665327B (en) Lane line detection method and device
JP7025126B2 (en) Information processing equipment, information processing methods, and information processing programs
JP3945494B2 (en) Travel lane recognition device
JP2018022220A (en) Behavior data analysis system, and behavior data analysis device, and behavior data analysis method
CN109635737A (en) Automobile navigation localization method is assisted based on pavement marker line visual identity
CN112307840A (en) Indicator light detection method, device, equipment and computer readable storage medium
CN103927548A (en) Novel vehicle collision avoiding brake behavior detection method
CN115273023A (en) Vehicle-mounted road pothole identification method and system and automobile
CN107292214A (en) Deviation detection method, device and vehicle
CN103227905A (en) Exposure controller for on-vehicle camera
CN101714210B (en) Image processing method, image processing device and image processing system
CN101124610A (en) Image processing method, image processing system, image processing device and computer program
CN114663859A (en) Sensitive and accurate complex road condition lane deviation real-time early warning system
CN101978392B (en) Image processing device for vehicle
CN105530404A (en) Image recognizing apparatus and image recognizing method
JP2004086417A (en) Method and device for detecting pedestrian on zebra crossing
KR100969603B1 (en) A licence plate recognition method based on geometric relations of numbers on the plate
JP2008032557A (en) In-vehicle navigation apparatus and road-type determining method
JP2009134591A (en) Vehicle color determination device, vehicle color determination system, and vehicle color determination method
CN116363617A (en) Ultra-small curvature lane line detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant