WO2011043039A1 - Air conditioner


Info

Publication number: WO2011043039A1
Authority: WIPO (PCT)
Prior art keywords: obstacle, detection means, person, area, position determination
Application number: PCT/JP2010/005883
Other languages: French (fr), Japanese (ja)
Inventors: 恵子 岩本, 智 佐藤, 寧 神野, 孝 杉尾, 智貴 森川, 博基 長谷川, 裕介 河野
Original Assignee: パナソニック株式会社 (Panasonic Corporation)
Application filed by パナソニック株式会社 (Panasonic Corporation)
Publication of WO2011043039A1

Classifications

    • F — MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F24 — HEATING; RANGES; VENTILATING
    • F24F — AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
    • F24F11/00 — Control or safety arrangements
    • F24F11/30 — Control or safety arrangements for purposes related to the operation of the system, e.g. for safety or monitoring
    • F24F2110/00 — Control inputs relating to air properties

Definitions

  • The present invention relates to an air conditioner whose indoor unit is provided with an obstacle detection device that detects the presence or absence of an obstacle, and which controls the wind direction changing blades and the like based on the detection result of the obstacle detection device so as to send conditioned air efficiently.
  • A conventional air conditioner detects the shape of the room in which the indoor unit is installed and the installation position of the indoor unit, and controls the air direction, the air volume, and the like based on the detected room shape and installation position so as to operate efficiently.
  • Specifically, left and right distance detection sensors and front and lower distance detection sensors are provided in the indoor unit. The right distance detection sensor measures the distance between the indoor unit and the right wall, the left distance detection sensor measures the distance between the indoor unit and the left wall, and the lower distance detection sensor measures the installation height of the indoor unit, whereby the installation position of the indoor unit is recognized.
  • The distance from each wall is measured to recognize the room shape.
  • Air conditioning control is performed efficiently by controlling the wind direction changing blade and the indoor fan according to the installation position of the indoor unit and the room shape recognized in this way (see, for example, Patent Document 1).
  • The present invention has been made in view of the above-described problems of the prior art, and its object is to provide an air conditioner that, by accurately recognizing the positions of obstacles in the area to be air-conditioned, widens the range of airflow control and realizes a comfortable air-conditioned space.
  • the present invention is an air conditioner provided with an imaging device that detects the presence or absence of an obstacle, and divides a region to be air-conditioned into a plurality of obstacle position determination regions.
  • The presence or absence of an obstacle is determined in each of the divided obstacle position determination areas, separately from the walls existing around the area to be air-conditioned.
  • The present invention is also an air conditioner provided with an imaging device that detects the presence or absence of a person and the presence or absence of an obstacle; the imaging device detects the presence or absence of an obstacle only when no person is detected, and does not detect the presence or absence of an obstacle when a person is detected.
  • With the above-described configuration, the present invention can accurately recognize the positions of the various obstacles existing in the room, widen the range of airflow control, and realize a comfortable air-conditioned space.
  • FIG. 1 is a front view of an indoor unit of an air conditioner according to the present invention.
  • FIG. 2 is a longitudinal sectional view of the indoor unit of FIG. 1.
  • FIG. 3 is a longitudinal sectional view of the indoor unit of FIG. 1 with the movable front panel opening the front opening and the upper and lower blades opening the outlet.
  • FIG. 4 is a longitudinal sectional view of the indoor unit of FIG. 1 in a state where the lower blades constituting the upper and lower blades are set downward.
  • FIG. 5 is a flowchart showing the flow of the human position estimation process in this embodiment.
  • FIG. 6 is a schematic diagram for explaining background difference processing in human position estimation in the present embodiment.
  • FIG. 7 is a schematic diagram for explaining processing for creating a background image in background difference processing.
  • FIG. 8 is a schematic diagram for explaining processing for creating a background image in background difference processing.
  • FIG. 9 is a schematic diagram for explaining a process of creating a background image in the background difference process.
  • FIG. 10 is a schematic diagram for explaining region division processing in human position estimation in the present embodiment.
  • FIG. 11 is a schematic diagram for explaining two coordinate systems used in this embodiment.
  • FIG. 12 is a schematic diagram showing the distance from the image sensor unit to the position of the center of gravity of the person.
  • FIG. 13 is a schematic diagram showing a human position determination area detected by the image sensor unit constituting the human body detection means.
  • FIG. 14 is a schematic diagram in the case where a person is present in the human position determination area detected by the imaging sensor unit constituting the human body detection means.
  • FIG. 15 is a flowchart for setting region characteristics in each region shown in FIG. 13.
  • FIG. 16 is a flowchart for finally determining the presence or absence of a person in each area shown in FIG. 13.
  • FIG. 17 is a schematic plan view of a residence where the indoor unit of FIG. 1 is installed.
  • FIG. 18 is a graph showing the long-term cumulative result of each image sensor unit in the residence of FIG. 17.
  • FIG. 19 is a schematic plan view of another residence in which the indoor unit of FIG. 1 is installed.
  • FIG. 20 is a graph showing the long-term cumulative result of each image sensor unit in the residence of FIG. 19.
  • FIG. 21 is a flowchart showing the flow of a human position estimation process using a process of extracting a person-like area from a frame image.
  • FIG. 22 is a flowchart showing the flow of human position estimation processing using processing for extracting a face-like region from a frame image.
  • FIG. 23 is a schematic diagram showing an obstacle position determination area detected by the obstacle detection means.
  • FIG. 24 is a schematic diagram for explaining obstacle detection by the stereo method.
  • FIG. 25 is a flowchart showing the flow of processing for measuring the distance to the obstacle.
  • FIG. 26 is a schematic diagram showing the distance from the image sensor unit to the position P.
  • FIG. 27 is an elevation view of a living space, and is a schematic diagram showing the measurement results of the obstacle detection means.
  • FIG. 28 is a schematic diagram showing the definition of the wind direction at each position of the left blades and right blades constituting the left and right blades 14.
  • FIG. 29 is a schematic plan view of a room for explaining a wall detection algorithm for determining a distance number by measuring a distance from an indoor unit to a surrounding wall surface.
  • FIG. 30 is a front view of another indoor unit of an air conditioner according to the present invention.
  • FIG. 31 is a schematic diagram showing the relationship between the image sensor unit and the light projecting unit.
  • FIG. 32 is a flowchart showing the flow of processing for measuring the distance to an obstacle using the light projecting unit and the image sensor unit.
  • FIG. 33 is a front view of another indoor unit of an air conditioner according to the present invention.
  • FIG. 34 is a flowchart showing the flow of processing of the human body distance detecting means using the human body detecting means.
  • FIG. 35 is a schematic diagram for explaining processing for estimating the distance from the image sensor unit to a person using v1 which is the v coordinate at the top of the image.
  • FIG. 36 is a flowchart showing the flow of processing of obstacle detection means using human body detection means.
  • FIG. 37 is a schematic diagram for explaining the process of estimating the height v2 of the person on the image using the distance information from the imaging sensor unit to the person estimated by the human body distance detection means.
  • FIG. 38 is a schematic diagram for explaining processing for estimating whether an obstacle exists between the image sensor unit and a person.
  • FIG. 39 is a schematic diagram for explaining processing for estimating whether an obstacle exists between the image sensor unit and a person.
  • The present invention is an air conditioner having an indoor unit provided with an imaging device that includes obstacle detection means for detecting the presence or absence of an obstacle, the air conditioner controlling wind direction changing means provided in the indoor unit based on a detection signal of the obstacle detection means, wherein the imaging device determines the presence or absence of an obstacle according to the condition of the room.
  • the area to be air-conditioned is divided into a plurality of obstacle position determination areas, and the imaging device determines whether there is an obstacle in each of the divided obstacle position determination areas around the area to be air-conditioned.
  • The imaging device further includes human body detection means for detecting the presence or absence of a person, and the wind direction changing means provided in the indoor unit is controlled based on the detection signal of the human body detection means and the detection signal of the obstacle detection means. When the human body detection means detects no person, the obstacle detection means detects the presence or absence of an obstacle; when the human body detection means detects a person, the obstacle detection means does not detect the presence or absence of an obstacle. Thereby, the position of an obstacle can be recognized quickly and accurately without mistaking a person for an obstacle, so the range of airflow control can be expanded and a comfortable air-conditioned space can be realized.
  • FIGS. 1 to 4 show the indoor unit of an air conditioner according to the present invention.
  • The indoor unit has a main body 2 and a movable front panel (hereinafter simply referred to as the front panel) 4 that can open and close the front opening 2a of the main body 2. When the air conditioner is stopped, the front panel 4 closes the front opening 2a in close contact with the main body 2; during operation of the air conditioner, the front panel 4 moves away from the main body 2 to open the front opening 2a.
  • FIGS. 1 and 2 show a state where the front panel 4 closes the front opening 2a, and FIGS. 3 and 4 show a state where the front panel 4 opens the front opening 2a.
  • A heat exchanger 6 is housed in the main body 2; indoor air taken in from the front opening 2a and the upper surface opening 2b undergoes heat exchange in the heat exchanger 6 and is blown back into the room.
  • A filter 16 for removing dust contained in the indoor air taken in from the front opening 2a and the upper surface opening 2b is provided between these openings and the heat exchanger 6.
  • The upper part of the front panel 4 is connected to the upper part of the main body 2 via two arms 18 and 20 provided at both ends thereof, and a drive motor (not shown) connected to the arm 18 is driven and controlled so that the front panel 4 opens and closes the front opening 2a.
  • the upper and lower blades 12 are composed of an upper blade 12a and a lower blade 12b, and are respectively swingably attached to the lower portion of the main body 2.
  • The upper blade 12a and the lower blade 12b are connected to separate driving sources (for example, stepping motors), and are independently angle-controlled by a control device (for example, a microcomputer) built in the indoor unit.
  • The upper and lower blades 12 can instead be composed of three or more blades; in this case, it is preferable that at least two of them (particularly the uppermost blade and the lowermost blade) be independently angle-controlled.
  • the left and right blades 14 are composed of a total of 10 blades arranged five by left and right from the center of the indoor unit, and are respectively swingably attached to the lower part of the main body 2.
  • The five left blades and the five right blades are each connected as a unit to separate drive sources (for example, stepping motors), and the left unit and the right unit are independently angle-controlled by the control device built in the indoor unit.
  • a method for driving the left and right blades 14 will also be described later.
  • an imaging sensor unit 24 is attached as an imaging device to the upper part of the front panel 4, and the imaging sensor unit 24 is held by a sensor holder.
  • The imaging sensor unit 24 includes a circuit board, a lens attached to the circuit board, and an imaging sensor mounted behind the lens. The human body detection means determines the presence or absence of a person on the circuit board based on, for example, the difference processing described later; that is, the circuit board acts as presence/absence determination means for determining the presence or absence of a person.
  • <Estimation of human position by image sensor unit> In order to estimate the human position with the image sensor unit 24, a difference method, which is a known technique, is used: difference processing is performed between a background image, i.e., an image in which no person is present, and an image captured by the image sensor unit 24, and a person is estimated to be present in a region where a difference occurs.
  • FIG. 5 is a flowchart showing the flow of human position estimation processing in the present embodiment.
  • In step S101, background difference processing is used to detect pixels in the frame image where a difference occurs.
  • Background difference processing compares a background image captured under specific conditions with an image captured under the same imaging conditions (the same field of view, viewpoint, and focal length of the imaging sensor unit 24), and detects objects that do not exist in the background image but exist in the captured image. To detect a person, an image without a person is created as the background image.
  • FIG. 6 is a schematic diagram for explaining the background difference processing.
  • FIG. 6A shows a background image.
  • the visual field is set to be substantially equal to the air-conditioned space of the air conditioner.
  • 101 indicates a window existing in the air-conditioned space
  • 102 indicates a door.
  • FIG. 6B shows a frame image captured by the image sensor unit 24.
  • the field of view, the viewpoint, the focal length, and the like of the image sensor unit 24 are equal to the background image of FIG.
  • Reference numeral 103 denotes a person existing in the air-conditioned space.
  • FIGS. 6A and 6B shows a difference image.
  • White pixels indicate pixels where no difference exists, and black pixels indicate pixels where a difference occurs. It can be seen that the area of the person 103 that is not present in the background image but is present in the captured frame image is detected as the area 104 where the difference occurs. That is, it is possible to detect a person area by extracting an area where a difference is generated from the difference image.
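  • As a concrete illustration (not taken from the patent), the following sketch computes such a difference image with OpenCV; the luminance threshold is an assumed value.

```python
# Minimal background-difference sketch; DIFF_THRESHOLD is an assumption.
import cv2

DIFF_THRESHOLD = 30  # assumed minimum gray-level change counted as a difference

def background_difference(background_bgr, frame_bgr):
    """Return a binary mask: 255 where the frame differs from the background."""
    bg = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    fg = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(bg, fg)                       # per-pixel absolute difference
    _, mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    return mask                                      # the person region appears as 255
```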
  • FIGS. 7A to 7C are schematic diagrams showing three consecutive frames of images taken by the imaging sensor unit 24 in a scene in which the person 103 is moving from right to left in front of the window 101.
  • FIG. 7B shows an image of the next frame of FIG. 7A
  • FIG. 7C shows an image of the next frame of FIG. 7B.
  • FIGS. 8A to 8C show inter-frame difference images obtained by performing inter-frame difference processing on the images of FIGS. 7A to 7C.
  • White pixels indicate pixels where no difference exists, and black pixels 105 indicate pixels where a difference occurs.
  • FIGS. 9A to 9C are diagrams schematically showing the update of the background image in each frame of FIGS. 7A to 7C.
  • A hatched area 106 indicates an area where the background image has been updated, a black area 107 indicates an area where a background image has not yet been created, and a white area 108 indicates an area where the background image has not been updated. That is, the total area of the black area 107 and the white area 108 in FIG. 9 is equal to the black area in FIG. 8.
  • By repeating this process, the black area 107 is gradually reduced and a background image is automatically created.
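  • A minimal sketch of this automatic background creation follows (the inter-frame difference threshold and the number of still frames required before a pixel is adopted into the background are assumptions): a pixel that shows no inter-frame difference for several consecutive frames is copied into the background image, so the black area 107 shrinks over time.

```python
# Sketch of the background creation suggested by FIGS. 7-9; thresholds are assumed.
import cv2
import numpy as np

DIFF_THRESHOLD = 15   # assumed inter-frame difference threshold
STABLE_FRAMES = 10    # assumed number of still frames before a pixel is adopted

class BackgroundBuilder:
    def __init__(self, shape):
        self.background = np.zeros(shape, dtype=np.uint8)
        self.stable = np.zeros(shape, dtype=np.int32)  # per-pixel still-frame counter
        self.prev = None

    def update(self, gray_frame):
        if self.prev is not None:
            moving = cv2.absdiff(gray_frame, self.prev) > DIFF_THRESHOLD
            self.stable = np.where(moving, 0, self.stable + 1)
            adopt = self.stable >= STABLE_FRAMES       # pixels still long enough
            self.background[adopt] = gray_frame[adopt] # area 106 grows, area 107 shrinks
        self.prev = gray_frame.copy()
        return self.background
```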
  • In step S102, the obtained difference pixels are divided into regions; if a plurality of persons are present, the difference pixels are divided into a plurality of difference regions.
  • The difference image can be divided into regions according to the rule that a pixel where a difference occurs and a neighboring pixel where a difference occurs belong to the same region.
  • FIG. 10 is a schematic diagram in which this region division processing is executed.
  • FIG. 10A shows a difference image calculated by the difference processing; the black pixels 111 and 112 are pixels where a difference occurs.
  • FIG. 10B shows the result of applying the region division processing to FIG. 10A.
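  • This region division corresponds to connected-component labeling, sketched below with OpenCV; the 8-neighbourhood is an assumption consistent with the stated rule.

```python
# Sketch of step S102: neighbouring difference pixels form one region.
import cv2
import numpy as np

def split_into_regions(diff_mask):
    """diff_mask: binary image (255 = difference). Returns one mask per region."""
    n_labels, labels = cv2.connectedComponents(diff_mask, connectivity=8)
    return [np.uint8(labels == k) * 255 for k in range(1, n_labels)]
```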
  • In step S103, the position of the detected person is determined by calculating the position of the center of gravity of each obtained region; for this calculation, perspective projection transformation, which is a known technique, may be used.
  • FIG. 11 is a schematic diagram for explaining two coordinate systems.
  • The first is the image coordinate system, a two-dimensional coordinate system in the captured image, in which the upper left pixel of the image is the origin, u points rightward, and v points downward.
  • The second is the camera coordinate system, a three-dimensional coordinate system based on the camera, in which the focal position of the image sensor unit 24 is the origin, the optical axis direction of the image sensor unit 24 is Zc, the camera upward direction is Yc, and the camera left direction is Xc.
  • Here, f is the focal length [mm], (u0, v0) is the image center [Pixel] in image coordinates, and (dpx, dpy) is the size of one pixel of the image sensor [mm/Pixel].
  • In FIGS. 12A and 12B, the position of the center of gravity of the person on the image is (ug, vg), and its three-dimensional position in the camera coordinate system is (Xgc, Ygc, Zgc).
  • FIG. 12A is a schematic view of the air-conditioned space viewed from the side
  • FIG. 12B is a schematic view of the air-conditioned space viewed from above.
  • The image sensor unit 24 is installed at a height H, with the Xc direction horizontal and the optical axis Zc inclined at an angle δ from the vertical direction.
  • The direction in which the image sensor unit 24 faces is expressed by a vertical angle (an elevation angle, measured upward from the vertical line) and a horizontal angle (measured rightward from the front reference line as viewed from the indoor unit). Furthermore, when the height of the center of gravity of the person is h, the distance L from the imaging sensor unit 24 to the center of gravity position and the direction W, which give the three-dimensional position in the air-conditioned space, can be calculated by the following equations.
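  • Since Equations 3 and 5 themselves are not reproduced in this text, the following sketch reconstructs the standard pinhole/ground-plane geometry they describe; the symbol delta (the tilt of the optical axis from the vertical) and all function names are assumptions.

```python
# Hedged reconstruction of the centroid-to-position geometry; not the patent's
# literal Equations 3 and 5, but the standard model they correspond to.
import numpy as np

def centroid_to_position(ug, vg, f, u0, v0, dpx, dpy, H, delta, h):
    """Return (L, W): distance from the sensor to the person's centroid, and the
    horizontal direction W [rad] measured rightward from the front reference line."""
    # Ray through the pixel in camera coordinates (Xc: left, Yc: up, Zc: optical axis).
    xc = (u0 - ug) * dpx
    yc = (v0 - vg) * dpy
    zc = f
    # Rotate into room coordinates (X: right, Y: up, Z: forward); the optical
    # axis is tilted by delta from the vertical.
    Xw = -xc
    Yw = yc * np.sin(delta) - zc * np.cos(delta)
    Zw = yc * np.cos(delta) + zc * np.sin(delta)
    # Intersect the ray from the sensor (height H) with the plane y = h.
    t = (h - H) / Yw
    L = t * np.linalg.norm([Xw, Yw, Zw])   # 3-D distance to the centroid
    W = np.arctan2(t * Xw, t * Zw)         # horizontal direction
    return L, W
```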
  • FIGS. 13A and 13B show in which area of the air-conditioned space a person is judged to exist when the position of the center of gravity on the image lies in each of the areas A to G.
  • FIGS. 14A and 14B are schematic diagrams for cases where a person is present. In FIG. 14A, the positions of the centers of gravity of the persons lie in the areas A and F, so it is determined that persons exist in the areas A and F of FIG. 13B. In FIG. 14B, the position of the center of gravity of the person lies in the area D, so it is determined that a person exists in the area D of FIG. 13B.
  • FIG. 15 is a flowchart for setting region characteristics to be described later in each of the regions A to G using the image sensor unit 24.
  • FIG. 16 is a flowchart for determining, using the image sensor unit 24, in which of the regions A to G a person exists.
  • In step S1, the presence or absence of a person in each area is first determined by the above-described method at a predetermined cycle T1 (for example, 200 milliseconds if the frame rate of the image sensor unit 24 is 5 fps).
  • Each of the areas A to G is classified into one of a first region where a person frequently stays (a place where a person settles), a second region where a person stays only briefly (an area that a person merely passes through), and a third region where people hardly go (a non-living area such as a wall or a window).
  • Hereinafter, the first region, the second region, and the third region are referred to as life category I, life category II, and life category III, respectively; they can also be called the region of region characteristic I, the region of region characteristic II, and the region of region characteristic III.
  • Life category I (region characteristic I) and life category II (region characteristic II) may also be combined into a living region (a region where people live), while life category III (region characteristic III) is treated as a non-living region; that is, the areas may be broadly classified according to the frequency of the presence or absence of a person.
  • FIG. 17 shows a case where the indoor unit of the air conditioner according to the present invention is installed in the LD of a 1LDK dwelling composed of one Japanese-style room, an LD (living/dining room), and a kitchen; the elliptical areas in FIG. 17 indicate the places where the subjects reported usually staying.
  • The presence or absence of a person in each of the regions A to G is determined every cycle T1, and 1 (reaction) or 0 (no reaction) is output as the reaction result (determination) for that cycle; this is repeated a plurality of times, and in step S2 all sensor outputs are cleared.
  • In step S3, it is determined whether or not a predetermined cumulative operation time of the air conditioner has elapsed. If it is determined in step S3 that the predetermined time has not elapsed, the process returns to step S1. If it is determined that the predetermined time has elapsed, the reaction results accumulated over the predetermined time in each of the regions A to G are compared with two threshold values, and each of the regions A to G is identified as one of the life categories I to III.
  • Specifically, a first threshold value and a second threshold value smaller than the first threshold value are set. In step S4, it is determined whether or not the long-term accumulation result of each of the regions A to G is greater than the first threshold value, and a region determined to be greater is identified as life category I in step S5. If the long-term accumulation result of a region is determined in step S4 to be less than the first threshold value, it is determined in step S6 whether or not it is greater than the second threshold value.
  • A region determined to be greater is identified as life category II in step S7, while a region determined to be smaller is identified as life category III in step S8.
  • In this example, the areas C, D, and G are identified as life category I, the areas B and F as life category II, and the areas A and E as life category III.
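  • The discrimination of steps S4 to S8 reduces to a two-threshold comparison, sketched below; the threshold values are placeholders, since the patent text does not state them.

```python
# Sketch of the life-category discrimination (steps S4-S8); thresholds assumed.
FIRST_THRESHOLD = 1000   # assumed value
SECOND_THRESHOLD = 200   # assumed value, smaller than the first

def life_category(long_term_accumulation):
    """Map a region's long-term accumulated reaction result to a life category."""
    if long_term_accumulation > FIRST_THRESHOLD:
        return "I"    # region where a person frequently stays
    if long_term_accumulation > SECOND_THRESHOLD:
        return "II"   # region a person merely passes through
    return "III"      # non-living region (wall, window, etc.)
```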
  • FIG. 19 shows a case where the indoor unit of the air conditioner according to the present invention is installed in the LD of another 1LDK dwelling, and FIG. 20 shows the result of discriminating each region A to G based on the long-term accumulation results in this case.
  • Here, the areas B, C, and E are identified as life category I, the areas A and F as life category II, and the areas D and G as life category III.
  • In step S23, it is determined whether or not a predetermined number M (for example, 45) of reaction results in the cycle T1 has been obtained. If the number of cycles has not reached M, the process returns to step S21; if it has, the total of the reaction results over the period T1 × M is calculated in step S24 as the cumulative reaction period count for one interval.
  • In step S27, 1 is subtracted from the number of times (N) the cumulative reaction period count is calculated, and the process returns to step S21, so that the calculation of the cumulative reaction period count is repeated a predetermined number of times.
  • Table 1 shows a history of reaction results for the latest one time (time T1 ⁇ M).
  • ΣA0 denotes the cumulative reaction period count of the most recent interval in the region A, ΣA1 the count of the interval immediately before ΣA0, ΣA2 the one before that, and so on.
  • With N = 4, the past four history values (ΣA4, ΣA3, ΣA2, ΣA1) are examined; for life category I, it is determined that a person is present if at least one of them has a cumulative reaction period count of one or more.
  • For life category II, it is determined that a person is present if, among the past four history values, two or more have a cumulative reaction period count of one or more; for life category III, it is determined that a person is present if, among the past four history values, three or more have a cumulative reaction period count of two or more.
  • For the other regions, the presence or absence of a person is similarly estimated from the past four history values, the life category, and the cumulative reaction period counts.
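  • The per-category decision rules just described can be sketched as follows; the list ordering of the four history values is an assumption of this illustration.

```python
# Sketch of the presence decision from the past four cumulative reaction
# period counts (e.g. [sigma_A4, sigma_A3, sigma_A2, sigma_A1] for region A).
def person_present(history, category):
    if category == "I":    # react quickly where people usually stay
        return sum(1 for s in history if s >= 1) >= 1
    if category == "II":
        return sum(1 for s in history if s >= 1) >= 2
    return sum(1 for s in history if s >= 2) >= 3   # category III: react slowly
```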
  • In this way, the region characteristic (life category I to III) of each of the areas A to G is determined from the long-term accumulation of the region determination results, the region determination results for each predetermined cycle are accumulated N times to obtain the cumulative reaction period counts of each region, and the time required for presence estimation and the time required for absence estimation are changed according to the region characteristic of each area.
  • Taking the area determined to be life category II as the standard, in an area determined to be life category I the presence of a person is estimated at a shorter time interval than in a life category II area, while the absence of a person is estimated at a longer time interval; conversely, in an area determined to be life category III, the time required for estimation is set longer, and the presence of a person is estimated at a longer time interval than in a life category II area.
  • In the above description, the difference method is used for human position estimation by the image sensor unit 24; however, a person-like region may instead be extracted from the frame image using image data of the whole human body.
  • As such processing, a technique using HOG (Histograms of Oriented Gradients) feature amounts is widely known (N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection”, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Vol. 1, pp. 886-893, 2005).
  • The HOG feature amount focuses on the edge strength in each edge direction within a local region; the person region may be detected from the frame image by learning and classifying this feature amount with an SVM (Support Vector Machine).
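  • As an illustration of such HOG-based detection (not the patent's own implementation), OpenCV ships a HOG descriptor with a pretrained linear SVM for pedestrians, which can stand in for the learned classifier described above.

```python
# Sketch of HOG + linear-SVM person detection with OpenCV's pretrained model.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame_bgr):
    """Return bounding boxes (x, y, w, h) of person-like regions in the frame."""
    rects, _weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    return rects
```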
  • FIG. 21 is a flowchart showing the flow of the human position estimation process using the process of extracting a human-like area from the frame image.
  • the same steps as those in FIG. 5 are denoted by the same reference numerals, and detailed description thereof is omitted here.
  • In step S104, a person-like area is extracted as the person area in the frame image using the HOG feature amount described above.
  • In step S103, the position of the detected person is obtained by calculating the position of the center of gravity of the obtained person area; for this calculation, Equations 3 and 5 may be used as described above.
  • a face-like area may be extracted from the frame image.
  • As such processing, a method using Haar-Like feature amounts is widely known (P. Viola and M. Jones, “Robust real-time face detection”, International Journal of Computer Vision, Vol. 57, No. 2, pp. 137-154, 2004).
  • The Haar-Like feature amount focuses on the luminance difference between local regions; the person region may be detected from the frame image by learning and classifying this feature amount with an SVM (Support Vector Machine) or the like.
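  • A sketch of Haar-Like-feature face detection follows, using OpenCV's pretrained frontal-face cascade; note that this classifier is a boosted cascade (Viola-Jones) rather than the SVM mentioned above, so it is a stand-in, not the patent's exact method.

```python
# Sketch of face detection with Haar-like features (OpenCV's bundled cascade).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    """Return bounding boxes (x, y, w, h) of face-like regions in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```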
  • FIG. 22 is a flowchart showing a flow of a human position estimation process using a process of extracting a face-like area from a frame image.
  • the same steps as those in FIG. 5 are denoted by the same reference numerals, and detailed description thereof is omitted here.
  • In step S105, a face-like area is extracted as the face area in the frame image using the Haar-Like feature amount described above.
  • In step S103, the position of the detected person is obtained by calculating the position of the center of gravity of the obtained face area; for this calculation, perspective projection transformation may be used.
  • the obstacle detection means for detecting an obstacle using the above-described imaging sensor unit 24 will be described.
  • Here, the term “obstacle” refers to any object that impedes the flow of conditioned air blown out from the air outlet 10 of the indoor unit toward the residents; it is a collective term for objects other than residents, such as furniture (sofas, tables), televisions, and audio equipment.
  • To detect obstacles, the floor surface of the living space is subdivided as shown in FIG. 23 based on the vertical angle and the horizontal angle defined above.
  • Each of these areas is defined as an obstacle position determination area (or “position”) in order to determine at which position an obstacle exists.
  • all the positions shown in FIG. 23 substantially coincide with the whole area of the human position determination area shown in FIG. 13B, and the area boundary in FIG. 13B substantially coincides with the position boundary in FIG.
  • In the present embodiment, the number of positions is set larger than the number of human position determination areas, so that at least two positions belong to each human position determination area and air conditioning control can be performed using these at least two obstacle position determination areas; it suffices, however, to divide the areas so that at least one position belongs to each person position determination area.
  • Also, in the present embodiment, the person position determination areas are divided according to their distance from the indoor unit, and the number of positions belonging to a near person position determination area is set larger than the number belonging to a far one; however, the number of positions belonging to each person position determination area may be the same regardless of the distance from the indoor unit.
  • The air conditioner according to the present invention detects the presence or absence of a person in the regions A to G by the human body detection means and the presence or absence of an obstacle in the positions A1 to G2 by the obstacle detection means, and provides a comfortable space by driving and controlling the upper and lower blades 12 and the left and right blades 14 as the wind direction changing means based on the detection signal (detection result) of the human body detection means and the detection signal (detection result) of the obstacle detection means.
  • The human body detection means can detect the presence or absence of a person by, for example, detecting an object that moves in the air-conditioned space. The obstacle detection means, on the other hand, merely detects distances with the image sensor unit 24, and therefore cannot by itself distinguish a person from an obstacle.
  • If a person were treated as an obstacle, the area where the person is located might not be air-conditioned, or the air-conditioning airflow might be blown directly at the person, with the risk of inefficient or uncomfortable air conditioning control.
  • Therefore, the obstacle detection means detects only obstacles by performing the data processing described below.
  • FIG. 24 is a schematic diagram for explaining obstacle detection by the stereo method.
  • In FIG. 24, the distance to a point P on an obstacle is measured using the two image sensor units 24 and 26. Here, f is the focal length, B is the distance between the focal points of the two image sensor units 24 and 26, u1 is the u coordinate of the obstacle in the image of the image sensor unit 24, u2 is the u coordinate of the corresponding point in the image of the image sensor unit 26, and X is the distance from the image sensor units to the point P. The image center positions of the two image sensor units 24 and 26 are assumed to be equal. The distance X from the imaging sensor units to the point P is then obtained from the standard stereo relation X = B·f / ((u1 − u2)·dpx).
  • That is, the distance X from the imaging sensor units to the obstacle point P depends on the parallax (u1 − u2).
  • The search for corresponding points may use a block matching method based on template matching.
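  • A sketch of this stereo distance measurement follows; OpenCV's StereoBM performs the block-matching correspondence search, and the camera parameters (f, B, dpx) are assumed values for illustration.

```python
# Sketch of stereo distance measurement: X = B*f / (d*dpx), d = u1 - u2 [pixels].
import cv2
import numpy as np

F_MM, B_MM, DPX_MM = 3.0, 50.0, 0.006  # assumed focal length, baseline, pixel pitch

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def distance_map_mm(left_gray, right_gray):
    """Per-pixel distance X [mm] computed from the block-matching disparity map."""
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan               # no reliable corresponding point
    return (B_MM * F_MM) / (disp * DPX_MM)
```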
  • Distance measurement (detection of the position of an obstacle) is performed using the imaging sensor units; from Equations 3, 5, and 6, it can be seen that the position of the obstacle is estimated from the pixel position and the parallax.
  • i and j in Table 3 indicate the pixel positions to be measured.
  • The vertical angle and the horizontal angle are, respectively, the elevation angle described above and the angle measured rightward from the front reference line as viewed from the indoor unit. That is, as viewed from the indoor unit, the pixels are set in the range of 5 to 80 degrees in the vertical direction and −80 to 80 degrees in the horizontal direction, and the image sensor units measure the parallax at each of these pixels.
  • the air conditioner performs distance measurement (detection of the position of an obstacle) by measuring parallax at each pixel from pixel [14,15] to pixel [142,105].
  • The detection range of the obstacle detection means at the start of operation of the air conditioner may be limited to an elevation angle of 10 degrees or more. The measurement data can then be used effectively, because the distance is measured only in areas where a person is unlikely to be detected at the start of operation, that is, areas such as walls (since a person is not an obstacle, data for areas where a person is present is not used, as will be described later).
  • In step S41, when it is determined that there is no person in the area corresponding to the current pixel (one of the areas A to G shown in FIG. 13), the process proceeds to step S42; when it is determined that there is a person, the process proceeds to step S43. That is, since a person is not an obstacle, for pixels corresponding to an area determined to contain a person, distance measurement is not performed and the previous distance data is used (the distance data is not updated); distance measurement is performed, and the newly measured distance data is used (the distance data is updated), only for pixels corresponding to areas determined to contain no person.
  • In other words, whether to update the obstacle determination in each obstacle position determination area is decided according to the result of the person presence/absence determination in the corresponding person position determination area, so that the presence or absence of an obstacle is determined efficiently. More specifically, in an obstacle position determination area belonging to a person position determination area determined by the human body detection means to contain no person, the previous determination result of the obstacle detection means is updated with the new determination result; in an obstacle position determination area belonging to a person position determination area determined to contain a person, the previous determination result of the obstacle detection means is not updated with a new determination result.
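  • This update rule can be sketched as follows; all function and variable names are hypothetical illustrations, not identifiers from the patent.

```python
# Sketch of gating the distance update on the person determination: pixels in
# areas where a person was detected keep their previous distance data.
def update_distance_data(distance_data, person_in_area, pixel_to_area, measure):
    """distance_data: dict pixel -> distance number; person_in_area: dict area -> bool;
    pixel_to_area maps a pixel to its person position determination area (A-G);
    measure(pixel) performs a new distance measurement for that pixel."""
    for pixel, area in pixel_to_area.items():
        if not person_in_area[area]:
            distance_data[pixel] = measure(pixel)  # update only where no person
    return distance_data
```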
  • In step S42, the parallax of each pixel is calculated using the above-described block matching method, and the process proceeds to step S44.
  • In step S44, it is determined whether data has been acquired eight times for the same pixel, that is, whether the distance measurement based on the acquired data is complete. If not, the process returns to step S41; if it is determined in step S44 that the distance measurement has been completed, the process proceeds to step S45.
  • In step S45, the reliability of the distance estimation is evaluated. If the estimate is judged reliable, distance number determination processing is performed in step S46; if it is judged unreliable, a neighboring distance number is used as the distance data of the pixel in step S47.
  • the image sensor units 24 and 26 function as obstacle position detection means.
  • Next, the distance number determination processing in step S46 will be described. First, the term “distance number” is explained.
  • The “distance number” expresses an approximate distance from the image sensor unit to a position P in the air-conditioned space. As shown in FIG. 26, the image sensor unit is installed 2 m above the floor; taking the distance from the unit to the position P as the “distance corresponding to the distance number” X [m], the position P is expressed by the following equation.
  • the distance X corresponding to the distance number depends on the parallax between the image sensor units 24 and 26.
  • the distance number is an integer value from 2 to 12, and the distance corresponding to each distance number is set as shown in Table 4.
  • Table 4 shows the position P corresponding to each distance number and to the elevation angle determined by the v-coordinate value of each pixel; in the blacked-out cells, h takes a negative value (h < 0), indicating a position that would cut into the floor.
  • Likewise, a position corresponding to a distance number of 10 or more lies beyond the wall of the room, at a diagonal distance > 4.50 m (a position outside the room); such distance numbers are meaningless and are also blacked out.
  • Table 6 shows the limit value of the distance number set according to the capability rank of the air conditioner and the elevation angle of each pixel.
  • Next, the reliability evaluation processing in step S45 and the distance number determination processing in step S46 will be described.
  • The distance number is determined eight times for each pixel; the two largest values and the two smallest values are removed, and the average of the remaining four distance numbers is taken as the distance number.
  • In the stereo method based on block matching, when an obstacle with little luminance variation is observed, the parallax calculation is unstable and may yield a parallax result (distance number) that differs greatly at each measurement. Therefore, in step S45 the values of the remaining four distance numbers are compared, and if the variation is equal to or greater than a threshold value, the distance number is judged unreliable, and in step S47 the distance number estimated at a neighboring pixel is used instead.
  • The average value is quantized to an integer by rounding up the decimal part, and the position corresponding to the distance number thus determined is as shown in Table 4 or Table 5.
  • In the present embodiment, the distance number is determined by taking the average of the four values that remain after removing the two largest and the two smallest distance numbers; however, the number of distance numbers determined for each pixel is not limited to eight, and the number of values averaged is not limited to four.
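  • The reliability evaluation and distance number determination can thus be sketched as a trimmed mean with a spread check; the variation threshold is an assumed value.

```python
# Sketch of steps S44-S47: eight samples per pixel, drop the two largest and
# two smallest, average the rest, and reject the result if the spread is large.
import math

VARIATION_THRESHOLD = 2  # assumed maximum spread of the four kept distance numbers

def determine_distance_number(samples):
    """samples: eight distance numbers for one pixel. Returns an int,
    or None when unreliable (step S47 then borrows a neighboring value)."""
    kept = sorted(samples)[2:-2]             # remove two smallest and two largest
    if max(kept) - min(kept) >= VARIATION_THRESHOLD:
        return None                          # unreliable: fall back to a neighbor
    return math.ceil(sum(kept) / len(kept))  # round up and quantize
```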
  • In step S43 of the flowchart of FIG. 25, the previous distance data is used. However, immediately after installation of the air conditioner no previous data exists, so when the obstacle detection means makes its first determination for an obstacle position determination area, a default value is used; the limit value (maximum value D) described above serves as the default value.
  • FIG. 27 is an elevation view (longitudinal sectional view passing through the image sensor unit) of a certain living space.
  • In this living space, the floor surface is 2 m below the image sensor unit, and a table or the like stands 0.7 to 1.1 m above the floor surface.
  • The measurement results for this case, in which obstacles are present, are shown in the drawing.
  • The shaded part, the upward-sloping hatched part, and the downward-sloping hatched part are determined to contain obstacles at short distance, medium distance, and long distance, respectively (these distances will be described later).
  • The areas A to G shown in FIG. 13 belong to the following blocks:
  • Block N: Region A; Block R: Regions B and E; Block C: Regions C and F; Block L: Regions D and G.
  • The regions A to G also belong to the following fields:
  • Field 1: Region A; Field 2: Regions B and D; Field 3: Region C; Field 4: Regions E and G; Field 5: Region F.
  • Furthermore, the distance from the indoor unit is defined as follows.
  • Table 7 shows the target setting angles at the positions of the five left blades and the five right blades constituting the left and right blades 14; the symbols attached to the numbers (angles) are as defined in FIG. 28.
  • the case where the left or right blade is directed inward is defined as a plus (+, no symbol in Table 7) direction, and the case where it is directed outward is defined as a minus ( ⁇ ) direction.
  • the “heating B area” in Table 7 is a heating area where obstacle avoidance control is performed, and “normal automatic wind direction control” is wind direction control where obstacle avoidance control is not performed.
  • Whether or not to perform the obstacle avoidance control is determined based on the temperature of the indoor heat exchanger 6: when the temperature is low, wind direction control that does not direct wind at the occupants is performed; when it is too high, wind direction control at the maximum air volume position is performed; and when the temperature is moderate, wind direction control toward the heating B area is performed.
  • “temperature is low”, “too high”, “wind direction control that does not apply wind to the occupant”, and “wind direction control at the maximum airflow position” have the following meanings.
  • - Low temperature: with the skin temperature (33 to 34 °C) taken as the optimum temperature of the indoor heat exchanger 6, a temperature below it (for example, 32 °C).
  • - Too high temperature: for example, 56 °C or higher.
  • - Wind direction control that does not direct wind at the occupants: wind direction control in which the angle of the upper and lower blades 12 is controlled so that the wind flows along the ceiling and is not sent into the living space.
  • - Wind direction control at the maximum air volume position: since bending the airflow with the upper and lower blades 12 and the left and right blades 14 always produces resistance (loss), this is wind direction control at the position where the loss is close to zero (for the left and right blades 14, the position facing directly forward; for the upper and lower blades 12, the position facing 35 degrees downward from the horizontal).
  • Table 8 shows target setting angles in each field of the upper and lower blades 12 when performing obstacle avoidance control.
  • The upper blade angle and the lower blade angle are angles (elevation angles) measured upward from the vertical line.
  • The swing operation is a swinging motion of the left and right blades 14, basically swinging with a predetermined left-right angular width around one target position, with no wind direction fixing time at either end of the swing.
  • the position stop operation means that the target setting angle (angle in Table 7) of a certain position is corrected as shown in Table 9 to be the left end and the right end, respectively.
  • The left end and the right end each have a wind direction fixing time (a time during which the left and right blades 14 are held still). For example, when the wind direction fixing time elapses at the left end, the blades move to the right end and the wind direction there is maintained until the fixing time elapses, after which they move back to the left end, and this is repeated.
  • the wind direction fixing time is set to 60 seconds, for example.
  • the set angles of the left and right blades 14 corresponding to the left end and the right end of each block are determined based on, for example, Table 10.
  • In the block stop operation, the wind direction is fixed at the left end and the right end of each block. For example, when the wind direction fixing time has elapsed at the left end, the blades move to the right end and maintain that wind direction until the fixing time has elapsed there, after which they move back to the left end, and this is repeated.
  • the wind direction fixing time is set to 60 seconds, for example, similarly to the position stop operation. Since the left end and the right end of each block coincide with the left end and the right end of the person position determination area belonging to the block, the block stop operation can be said to be a stop operation of the person position determination area.
  • position stop operation and block stop operation are properly used according to the size of the obstacle.
  • When the obstacle in front is small, the position stop operation is performed around the position where the obstacle is, so that air is blown while avoiding the obstacle; when the obstacle in front is large, for example when an obstacle lies in front of the entire area where the person is, air is blown over a wide range by performing the block stop operation.
  • the swing operation, the position stop operation, and the block stop operation are collectively referred to as the swing operation of the left and right blades 14.
  • The airflow control described below assumes that the human body detection means has determined that a person is present in only a single region.
  • In addition, airflow control is performed by controlling the upper and lower blades 12 so as to avoid the obstacle from above.
  • When the obstacle detection means determines that there is an obstacle in an obstacle position determination area belonging to the person position determination area in which the human body detection means has determined that a person is present, one of the following two airflow controls is selected: a first airflow control in which the left and right blades 14 are swung within at least one obstacle position determination area belonging to that person position determination area, with no fixing time for the left and right blades 14 at the ends of the swing range; and a second airflow control in which the left and right blades 14 are swung within at least one obstacle position determination area belonging to that person position determination area or to an adjacent person position determination area, with fixing times for the left and right blades 14 provided at both ends of the swing range.
  • In the first airflow control, both the left blades and the right blades continue to swing through an angular range of ±10 degrees without stopping at the center.
  • the timing of swinging the left and right blades to the left and right is set to be the same, and the swinging motions of the left and right blades are linked.
  • In this case, the first airflow control is performed by swinging with the target setting angles of two obstacle-free positions at the two ends, so that basically an obstacle-free position is air-conditioned.
  • The block N is operated in block stop and the second airflow control is performed. This is because the block stop operation is more directional and can reach farther than covering the entire area, and so has a higher possibility of avoiding the obstacles. That is, even when obstacles are scattered in the area A, there are usually gaps between them, and air can be blown through the gaps between the obstacles.
  • the first airflow control is performed by swinging left and right. For example, when there is a person in the region D and there is an obstacle only at the position D2, the swing operation is performed to the left and right around the target setting angle of the position D1.
  • The block including the area where the person is present is operated in block stop and the second airflow control is performed.
  • In this case, the block L is operated in block stop.
  • The first airflow control is performed by swinging around the target setting angle of a position in the middle-distance region where there is no obstacle. For example, when there is a person in the area E, an obstacle at the position B2, no obstacles on either side of it, but obstacles behind it, it is advantageous to send the airflow from the obstacle-free position B1.
  • The first airflow control is performed by swinging around the target setting angle of the position where there is no obstacle. For example, when there is a person in the area F, an obstacle at position C2, an obstacle at position D1 on one side of position C2, and no obstacle at position C1, airflow can be sent to the area F from the obstacle-free position C1 while avoiding the obstacles at positions C2 and D1.
  • the block including the area where the person is present is operated in block stop to perform the second air flow control.
  • the block C is operated in a block stop state. In this case, since there is an obstacle ahead of the person and there is no way to avoid the obstacle, the block stop operation is performed regardless of whether there is an obstacle in the block adjacent to the block C.
  • The first airflow control is performed by swinging around the target setting angle of the other position where there is no obstacle. For example, when there is a person in the area F, no obstacles at the positions C1, C2, and F1, and an obstacle at the position F2, the front of the area F in which the person is present is open; taking this into consideration, air conditioning is performed around the obstacle-free far position F1.
  • <Human wall proximity control> When a person and a wall exist in the same area, the person is necessarily located in front of and close to the wall. During heating, warm air tends to collect near the wall, so the room temperature there tends to be higher than in other parts of the room; therefore, human wall proximity control is performed.
  • the parallax is calculated in a pixel different from the pixel [i, j] shown in Table 4, the distance is detected, and the positions of the front wall and the left and right walls are first recognized.
  • the parallax of the pixel corresponding to the front in the substantially horizontal direction is calculated, and the distance to the front wall is measured to obtain the distance number. Further, the parallax of the pixel corresponding to the left side in the substantially horizontal direction is calculated, the distance to the left wall is measured, the distance number is obtained, and the distance number of the right wall is obtained similarly.
  • FIG. 29 is a top view of a room in which the indoor unit is installed, showing a case where a front wall WC, a left wall WL, and a right wall WR exist on the front, left, and right sides as viewed from the indoor unit.
  • the numbers on the left side of FIG. 29 indicate the distance numbers of the corresponding cells, and Table 12 indicates the distances from the indoor unit to the near and far points corresponding to the distance numbers.
  • The “obstacles” referred to in this specification are assumed to be furniture such as tables and sofas, televisions, audio equipment, and the like; such objects are not detected in the angle range above an elevation of 75 degrees, so whatever is detected there can be presumed to be a wall. In the present embodiment, therefore, the distances to the front, left, and right of the indoor unit are detected at elevation angles of 75 degrees or more, and a wall is assumed to lie on the extension through the detected position.
  • Specifically, the left wall WL is detected at the horizontal angles of −80 and −75 degrees, the front wall WC at −15 to 15 degrees, and the right wall WR at 75 and 80 degrees. Therefore, among the pixels shown in Table 3, the pixels corresponding to these horizontal viewing angles at elevation angles of 75 degrees or more are as follows.
  • The upper limit value and the lower limit value of each set of wall surface data are discarded to eliminate unnecessary wall surface data, and the distance numbers to the front wall WC, the left wall WL, and the right wall WR are then determined; for example, the maximum values in Table 14 (WC: 5, WL: 6, WR: 3) can be adopted.
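  • A sketch of this wall distance-number decision follows (drop the extreme samples, then adopt the maximum of the remainder, as in the Table 14 example).

```python
# Sketch of the wall distance-number decision for one wall (WC, WL, or WR).
def wall_distance_number(samples):
    """samples: distance numbers measured toward one wall at elevation >= 75 deg."""
    trimmed = sorted(samples)[1:-1]   # delete the upper and lower limit values
    return max(trimmed)               # adopt the maximum of the remaining data
```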
  • in the case of a large room, a wider space should be set as the target of air-conditioning control.
  • the set temperature is shifted lower than the temperature set on the remote control. Specifically, the set temperature is lowered by a first predetermined temperature (for example, 2 °C).
  • B. When a person is in a long-distance area: since the long-distance area is far from the indoor unit and large, the degree of rise in room temperature there is lower than in the short-distance or medium-distance areas.
  • the set temperature is lowered by a second predetermined temperature (for example, 1 °C) that is smaller than the first predetermined temperature.
  • since the long-distance area is large, even when it is detected that a person and a wall are in the same person position determination area, the person and the wall may actually be far apart. Human-wall proximity control is therefore performed only in this case, and the temperature shift is made according to the positional relationship between the person and the wall.
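The temperature-shift rule described in this and the preceding paragraphs can be summarized by the following hedged sketch; the area labels, the condition for applying the shift, and all parameter defaults are assumptions for illustration only:

```python
def shifted_setpoint(remote_setpoint_c, area, wall_near_person,
                     first_shift_c=2.0, second_shift_c=1.0):
    """Human-wall proximity control as a set-temperature shift: lower the
    remote-control setpoint when a person sits close to a wall, where warm
    air accumulates during heating."""
    if not wall_near_person:
        return remote_setpoint_c
    if area in ("short", "medium"):
        return remote_setpoint_c - first_shift_c   # e.g. a 2 degC shift
    if area == "long":
        return remote_setpoint_c - second_shift_c  # smaller, e.g. 1 degC
    return remote_setpoint_c
```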
  • by making the scanning order of the imaging sensor unit different between wall detection and obstacle detection, walls and obstacles can be detected efficiently and accurately.
  • the distance is measured in the following order when a wall is detected.
  • in the embodiments above, the stereo method is used as the distance detection means, but a method using the light projecting unit 28 and the image sensor unit 24 may be used instead of the stereo method. This method will now be described.
  • the main body 2 of the present embodiment includes an image sensor unit 24 and a light projecting unit 28.
  • the light projecting unit 28 includes a light source and a scanning unit (not shown); the light source may be an LED or a laser, and the scanning unit can change the light projecting direction arbitrarily using, for example, a galvanometer mirror.
  • FIG. 31 is a schematic diagram showing the relationship between the image sensor unit 24 and the light projecting unit 28. Although the light projecting direction actually has two degrees of freedom and the imaging surface is a two-dimensional (vertical and horizontal) plane, the figure shows the relationship in one dimension for ease of explanation.
  • the light projecting unit 28 projects light in the light projecting direction ⁇ with respect to the optical axis direction of the imaging sensor unit 24.
  • the image sensor unit 24 performs difference processing between the frame image captured immediately before the light projecting unit 28 projects light and the frame image captured during projection, thereby detecting the reflection point P of the projected light and acquiring its u coordinate u1 on the image.
  • distance information in the air-conditioned space can be obtained by detecting the reflection point P of the light while changing the light projecting direction ⁇ of the light projecting unit 28.
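As a rough illustration of how a distance can be recovered from the projection angle (denoted phi in the sketch) and the observed pixel coordinate u1, the following sketch intersects the projector ray with the camera ray in a single plane. The patent does not give the geometry or any calibration values, so the baseline, focal length, and image-centre parameters below are pure assumptions:

```python
import math

def triangulate_distance(phi_deg, u1, u0=80.0, f_px=150.0, baseline_m=0.05):
    """Active triangulation sketch: the projector, offset from the camera by
    baseline_m along x, emits a ray at phi_deg from the optical axis; the
    camera sees the reflection point P at pixel u1."""
    theta = math.atan2(u1 - u0, f_px)   # camera-ray angle from the optical axis
    phi = math.radians(phi_deg)
    denom = math.tan(theta) - math.tan(phi)
    if abs(denom) < 1e-9:
        return None                     # rays (nearly) parallel: no intersection
    z = baseline_m / denom              # depth along the optical axis
    if z <= 0:
        return None                     # intersection behind the sensor
    x = z * math.tan(theta)
    return math.hypot(x, z)             # distance from the camera to point P
```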
  • i and j indicate the addresses scanned by the light projecting unit 28; for each address, the vertical angle is the elevation angle ε and the horizontal angle is the angle θ measured to the right from the front reference line as viewed from the indoor unit. That is, as viewed from the indoor unit, the addresses are set in the range of 5 to 80 degrees in the vertical direction and -80 to 80 degrees in the horizontal direction, and the light projecting unit 28 measures each address in turn to scan the living space.
  • in step S48, when it is determined that there is no person in the area (any one of areas A to G shown in FIG. 13) corresponding to the address [i, j] at which the light projecting unit 28 projects light, the process proceeds to step S49; when it is determined that a person is present, the process proceeds to step S43. That is, since a person is not an obstacle, for pixels corresponding to an area determined to contain a person, distance measurement is not performed and the previous distance data is used (the distance data is not updated); distance measurement is performed only for pixels corresponding to areas determined to contain no person, and the newly measured distance data is used (the distance data is updated).
  • in step S49, the distance to the obstacle is estimated by performing the light projection process described above and acquiring the reflection point from the image sensor unit 24.
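The update policy of steps S48 and S49 amounts to the small loop sketched below; the helper names and data structures are hypothetical, chosen only to mirror the description (keep old distance data where a person is detected, re-measure elsewhere):

```python
def scan_obstacles(addresses, person_in_area, measure_distance, distance_map):
    """One scan over all light-projection addresses [i, j]."""
    for (i, j) in addresses:
        if person_in_area(i, j):
            continue  # a person is not an obstacle: keep the previous data
        # Person-free area: project light, find the reflection point, and
        # store the newly measured distance (update the distance data).
        distance_map[(i, j)] = measure_distance(i, j)
    return distance_map
```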
  • the distance number determination process described above may then be applied so that subsequent processing is performed using distance numbers.
  • human body detection means may also be used as distance detection means. This approach comprises human body distance detection means using the human body detection means and obstacle detection means using the human body detection means, and is described below.
  • FIG. 34 is a flowchart showing the flow of processing of the human body distance detecting means using the human body detecting means.
  • the same steps as those in FIG. 5 are denoted by the same reference numerals, and detailed description thereof is omitted here.
  • in step S201, for each region into which the human body detection means has divided the difference image, the human body distance detection means detects the pixel located highest in the image among the pixels where a difference has arisen, and acquires its v coordinate as v1.
  • the human body distance detection means estimates the distance from the image sensor unit to the person using v1, the v coordinate at the top of the image.
  • FIG. 35 is a schematic diagram for explaining this process.
  • FIG. 35A is a schematic diagram of a scene in which two persons 121 and 122 are present near to and far from the camera, and FIG. 35B shows the difference image of the images captured by the image sensor unit in the scene of FIG. 35A.
  • the areas 123 and 124 where the difference occurs correspond to the persons 121 and 122, respectively.
  • it is assumed that the height h1 of a person is known and that the heights of all persons in the air-conditioned space are substantially equal.
  • since the image sensor unit 24 is installed at a height of 2 m, as shown in FIG. 35A, it captures images while looking down on the person from above. The closer the person is to the image sensor unit, the lower in the image the person appears, as shown in FIG. 35B. That is, the v coordinate v1 at the top of the person in the image and the distance from the image sensor unit to the person correspond one to one. Human body distance detection using the human body detection means can therefore be performed by obtaining in advance the correspondence between the topmost v coordinate v1 of the person and the distance from the imaging sensor unit to the person.
  • Table 17 shows an example in which the average height of a person is used as h1 and the correspondence between the v coordinate v1 at the top of the image and the distance from the imaging sensor unit to the person is obtained in advance.
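A geometric version of the Table 17 lookup can be sketched as follows, assuming a simple pinhole camera looking down from the installation height; the image-centre row, focal length, and tilt angle are invented calibration values, and h1 is the assumed common person height:

```python
import math

def person_distance_from_v1(v1, v0=60.0, f_px=150.0, tilt_deg=30.0,
                            H=2.0, h1=1.6):
    """Map the topmost person row v1 to a horizontal distance, using the
    one-to-one relation described in the text (sensor at height H looking
    down on a person of known height h1)."""
    # Depression angle of the viewing ray through row v1 (rows grow downward).
    ray_deg = tilt_deg + math.degrees(math.atan2(v1 - v0, f_px))
    if ray_deg <= 0:
        return None  # ray at or above the horizon never meets the head top
    # The ray drops (H - h1) vertically before reaching the top of the head.
    return (H - h1) / math.tan(math.radians(ray_deg))
```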
  • FIG. 36 is a flowchart showing the flow of processing of obstacle detection means using human body detection means.
  • in step S203, the obstacle detection means estimates the height v2 of the person on the image using the distance information from the image sensor unit 24 to the person estimated by the human body distance detection means.
  • FIG. 37 is a schematic diagram for explaining this processing, showing a scene similar to that of FIG. 35.
  • as described above, the height h1 of the person is assumed known, and the heights of all persons in the air-conditioned space are assumed to be substantially equal.
  • since the image sensor unit 24 is installed at a height of 2 m, the image sensor unit captures images while looking down on the person from above, as shown in FIG. 37. The closer the person is to the image sensor unit 24, the larger the person appears on the image.
  • the difference v2 between the v coordinate at the top and the v coordinate at the bottom of the person in the image corresponds one to one with the distance from the image sensor unit 24 to the person. Therefore, when the distance from the image sensor unit to the person is known, the person's size on the image can be estimated; this is done by obtaining in advance the correspondence between v2 and the distance from the image sensor unit to the person.
  • in step S204, for each region of the difference image, the obstacle detection means detects the topmost difference pixel and the bottommost difference pixel in the image and calculates the difference v3 between their v coordinates.
  • in step S205, the person's height v2 on the image, estimated using the distance information from the image sensor unit 24 to the person, is compared with the person's height v3 obtained from the actual difference image, thereby estimating whether an obstacle is present between the image sensor unit 24 and the person.
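The comparison of step S205 can be expressed as a one-line test; the patent only says v3 must be "sufficiently smaller" than v2, so the ratio threshold below is an assumption:

```python
def obstacle_between(v2_expected, v3_observed, ratio_threshold=0.6):
    """If the person height observed in the difference image (v3) is much
    smaller than the height expected from the estimated distance (v2), the
    lower body is presumably hidden, so an obstacle is inferred between the
    imaging sensor unit and the person (step S206)."""
    return v3_observed < ratio_threshold * v2_expected
```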
  • FIGS. 38 and 39 are schematic diagrams for explaining this process.
  • FIG. 38 shows a scene similar to FIG. 35, and is a schematic diagram showing a scene where no obstacle exists between the image sensor unit 24 and a person.
  • FIG. 39 is a schematic diagram showing a scene where an obstacle exists.
  • when it is determined in step S205 that v3 is sufficiently smaller than v2, the process proceeds to step S206, and it is determined that an obstacle exists between the imaging sensor unit and the person. In this case, the distance between the imaging sensor unit and the obstacle is taken to be equal to the distance from the imaging sensor unit to the person obtained from the topmost v coordinate v1.
  • in this way, the distance detection means is realized by using the detection result of the human body detection means.
  • in the embodiments described above, subdivided person position determination areas and obstacle position determination areas are provided, and the suction temperature of the indoor unit is controlled according to the detected positional relationship between the wall and the person.
  • however, techniques for detecting the positions of people and walls in the area to be air-conditioned without using subdivided person position determination areas and obstacle position determination areas are well known. It may therefore instead be determined from the detected positions whether the person and the wall are within a predetermined distance of each other, and the suction temperature of the indoor unit may be controlled when an affirmative determination is obtained.
  • the air conditioner according to the present invention has the effect of enabling energy-saving operation while realizing a comfortable air-conditioned space, and is useful as various air conditioners including general home air conditioners.
  • 2 indoor unit body, 2a front opening, 2b top opening, 4 movable front panel, 6 heat exchanger, 8 indoor fan, 10 air outlet, 12 upper and lower blades, 14 left and right blades, 16 filter, 18, 20 front panel arms, 24, 26 imaging sensor units, 28 light projecting unit.

Abstract

An air conditioner provided with an image-capturing device for detecting whether or not an obstacle is present, wherein the image-capturing device determines, separately from the walls present around the region to be air-conditioned, whether or not an obstacle is present in each of a plurality of divided obstacle position determination regions.

Description

Air conditioner
 The present invention relates to an air conditioner in which the indoor unit is provided with an obstacle detection device that detects the presence or absence of an obstacle, and in which the wind direction changing blades and the like are controlled based on the detection result of the obstacle detection device so that conditioned air is sent out efficiently.
 A conventional air conditioner detects the form of the room in which the indoor unit is installed and the installation position of the indoor unit, and controls the air direction, air volume, and so on based on the detected room form and installation position, thereby operating efficiently.
 In this prior art, left and right distance detection sensors and front and lower distance detection sensors are provided in the indoor unit; the distance between the indoor unit and the right wall is measured by the right distance detection sensor, the distance between the indoor unit and the left wall is measured by the left distance detection sensor, and the installation height of the indoor unit is measured by the lower distance detection sensor, whereby the installation position of the indoor unit is recognized.
 Furthermore, the distance between the indoor unit and the right wall detected by the right distance detection sensor, the distance between the indoor unit and the left wall detected by the left distance detection sensor, and the distance between the indoor unit and the front wall detected by the front distance detection sensor are measured to recognize the form of the room. Air-conditioning control is then performed efficiently by controlling the wind direction changing blades and the indoor fan according to the installation position of the indoor unit and the form of the room recognized in this way (see, for example, Patent Document 1).
Japanese Patent No. 2723470
 However, merely controlling the wind direction changing blades and the indoor fan according to the installation position of the indoor unit and the form of the room is still insufficient to realize a comfortable air-conditioned space; since various obstacles exist in the room, it is necessary to detect the positions of the obstacles and perform air-conditioning control accordingly.
 The present invention has been made in view of the above problems of the prior art, and its object is to provide an air conditioner that widens the range of airflow control and realizes a comfortable air-conditioned space by accurately recognizing the positions of obstacles in the area to be air-conditioned.
 In order to achieve the above object, the present invention is an air conditioner provided with an imaging device that detects the presence or absence of an obstacle, in which the area to be air-conditioned is divided into a plurality of obstacle position determination areas, and the imaging device determines the presence or absence of an obstacle in each divided obstacle position determination area separately from the walls existing around the area to be air-conditioned.
 The present invention is also an air conditioner provided with an imaging device that detects the presence or absence of a person and the presence or absence of an obstacle, in which the imaging device detects the presence or absence of an obstacle only when no person is detected, and does not detect obstacles while a person is detected.
 With the above configuration, the present invention can accurately recognize the positions of the various obstacles existing in the room, widening the range of airflow control and realizing a comfortable air-conditioned space.
FIG. 1 is a front view of an indoor unit of an air conditioner according to the present invention.
FIG. 2 is a longitudinal sectional view of the indoor unit of FIG. 1.
FIG. 3 is a longitudinal sectional view of the indoor unit of FIG. 1 with the movable front panel opening the front opening and the upper and lower blades opening the air outlet.
FIG. 4 is a longitudinal sectional view of the indoor unit of FIG. 1 with the lower blade of the upper and lower blades set to face downward.
FIG. 5 is a flowchart showing the flow of the human position estimation processing in this embodiment.
FIG. 6 is a schematic diagram for explaining the background difference processing used in human position estimation in this embodiment.
FIGS. 7 to 9 are schematic diagrams for explaining the processing of creating a background image in the background difference processing.
FIG. 10 is a schematic diagram for explaining the region division processing used in human position estimation in this embodiment.
FIG. 11 is a schematic diagram for explaining the two coordinate systems used in this embodiment.
FIG. 12 is a schematic diagram showing the distance from the imaging sensor unit to the centroid position of a person.
FIG. 13 is a schematic diagram showing the person position determination regions detected by the imaging sensor unit constituting the human body detection means.
FIG. 14 is a schematic diagram of a case where a person is present in the person position determination regions detected by the imaging sensor unit constituting the human body detection means.
FIG. 15 is a flowchart for setting the region characteristics of each region shown in FIG. 13.
FIG. 16 is a flowchart for finally determining the presence or absence of a person in each region shown in FIG. 13.
FIG. 17 is a schematic plan view of a residence in which the indoor unit of FIG. 1 is installed.
FIG. 18 is a graph showing the long-term accumulation results of each imaging sensor unit in the residence of FIG. 17.
FIG. 19 is a schematic plan view of another residence in which the indoor unit of FIG. 1 is installed.
FIG. 20 is a graph showing the long-term accumulation results of each imaging sensor unit in the residence of FIG. 19.
FIG. 21 is a flowchart showing the flow of person position estimation processing using the processing of extracting person-like regions from a frame image.
FIG. 22 is a flowchart showing the flow of person position estimation processing using the processing of extracting face-like regions from a frame image.
FIG. 23 is a schematic diagram showing the obstacle position determination regions detected by the obstacle detection means.
FIG. 24 is a schematic diagram for explaining obstacle detection by the stereo method.
FIG. 25 is a flowchart showing the flow of the processing of measuring the distance to an obstacle.
FIG. 26 is a schematic diagram showing the distance from the imaging sensor unit to a position P.
FIG. 27 is an elevation view of a living space, schematically showing the measurement results of the obstacle detection means.
FIG. 28 is a schematic diagram showing the definition of the wind direction at each position of the left and right blades constituting the left-right blade unit.
FIG. 29 is a schematic plan view of a room for explaining the wall detection algorithm that measures the distances from the indoor unit to the surrounding walls to obtain the distance numbers.
FIG. 30 is a front view of the indoor unit of another air conditioner according to the present invention.
FIG. 31 is a schematic diagram showing the relationship between the imaging sensor unit and the light projecting unit.
FIG. 32 is a flowchart showing the flow of the processing of measuring the distance to an obstacle using the light projecting unit and the imaging sensor unit.
FIG. 33 is a front view of the indoor unit of yet another air conditioner according to the present invention.
FIG. 34 is a flowchart showing the flow of the processing of the human body distance detection means using the human body detection means.
FIG. 35 is a schematic diagram for explaining the processing of estimating the distance from the imaging sensor unit to a person using v1, the v coordinate at the top of the image.
FIG. 36 is a flowchart showing the flow of the processing of the obstacle detection means using the human body detection means.
FIG. 37 is a schematic diagram for explaining the processing of estimating the height v2 of a person on the image using the distance information from the imaging sensor unit to the person estimated by the human body distance detection means.
FIGS. 38 and 39 are schematic diagrams for explaining the processing of estimating whether an obstacle exists between the imaging sensor unit and a person.
 The present invention is an air conditioner in which the indoor unit is provided with an imaging device having obstacle detection means for detecting the presence or absence of an obstacle, and in which wind direction changing means provided in the indoor unit is controlled based on the detection signal of the obstacle detection means; the imaging device determines the presence or absence of an obstacle according to the conditions in the room.
 Specifically, the area to be air-conditioned is divided into a plurality of obstacle position determination areas, and the imaging device determines the presence or absence of an obstacle in each divided obstacle position determination area separately from the walls existing around the area to be air-conditioned; this widens the range of airflow control and realizes a comfortable air-conditioned space.
 Also, specifically, by excluding the left and right wall portions of the plurality of divided areas from the obstacle detection target areas, obstacle detection distinguished from the walls can be performed easily.
 In another aspect, the imaging device further includes human body detection means for detecting the presence or absence of a person, and the wind direction changing means provided in the indoor unit is controlled based on the detection signals of the human body detection means and the obstacle detection means. When the human body detection means does not detect a person, the presence or absence of an obstacle is detected; when the human body detection means detects a person, the presence or absence of an obstacle is not detected. As a result, the position of an obstacle can be recognized quickly and accurately without recognizing a person as an obstacle, widening the range of airflow control and realizing a comfortable air-conditioned space.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings.
<Overall configuration of the air conditioner>
 An air conditioner used in a general home is usually composed of an outdoor unit and an indoor unit connected to each other by refrigerant piping; FIGS. 1 to 4 show the indoor unit of an air conditioner according to the present invention.
 The indoor unit has a main body 2 and a movable front panel (hereinafter simply referred to as the front panel) 4 that opens and closes the front opening 2a of the main body 2. When the air conditioner is stopped, the front panel 4 is in close contact with the main body 2 and closes the front opening 2a, whereas during operation the front panel 4 moves away from the main body 2 to open the front opening 2a. FIGS. 1 and 2 show the state in which the front panel 4 closes the front opening 2a, and FIGS. 3 and 4 show the state in which the front panel 4 opens the front opening 2a.
 As shown in FIGS. 1 to 4, the main body 2 houses a heat exchanger 6; an indoor fan 8 for blowing indoor air taken in from the front opening 2a and the top opening 2b through the heat exchanger 6 and into the room; upper and lower blades 12 that open and close the air outlet 10 through which the heat-exchanged air is blown out and change the blowing direction up and down; and left and right blades 14 that change the blowing direction left and right. A filter 16 for removing dust contained in the indoor air taken in from the front opening 2a and the top opening 2b is provided between these openings and the heat exchanger 6.
 The upper part of the front panel 4 is connected to the upper part of the main body 2 via two arms 18 and 20 provided at both ends, and by drive-controlling a drive motor (not shown) connected to the arm 18, the front panel 4 moves forward and obliquely upward during operation from its position when the air conditioner is stopped (the position closing the front opening 2a).
 Furthermore, the upper and lower blades 12 consist of an upper blade 12a and a lower blade 12b, each swingably attached to the lower part of the main body 2. The upper blade 12a and the lower blade 12b are connected to separate drive sources (for example, stepping motors) and are angle-controlled independently by a control device (for example, a microcomputer) built into the indoor unit. As is clear from FIGS. 3 and 4, the changeable angle range of the lower blade 12b is set larger than that of the upper blade 12a.
 The method of driving the upper blade 12a and the lower blade 12b will be described later. The upper and lower blades 12 may also consist of three or more blades; in this case, it is preferable that at least two of them (particularly the uppermost and lowermost blades) can be angle-controlled independently.
 The left and right blades 14 consist of a total of ten blades, five arranged on each side of the center of the indoor unit, each swingably attached to the lower part of the main body 2. The five blades on each side are connected as a unit to separate drive sources (for example, stepping motors), and the left and right groups of five blades are angle-controlled independently by the control device built into the indoor unit. The method of driving the left and right blades 14 will also be described later.
<Configuration of the human body detection means>
 As shown in FIG. 1, an imaging sensor unit 24 is attached as an imaging device to the upper part of the front panel 4 and is held by a sensor holder.
 The imaging sensor unit 24 includes a circuit board, a lens attached to the circuit board, and an imaging sensor mounted inside the lens. The presence or absence of a person is determined by the circuit board based on, for example, the difference processing described later; that is, the circuit board acts as presence determination means.
<Human position estimation by the imaging sensor unit>
 To estimate the position of a person with the imaging sensor unit 24, the difference method, a known technique, is used. Difference processing is performed between a background image, i.e., an image in which no person is present, and an image captured by the imaging sensor unit 24, and it is presumed that a person exists in any region where a difference arises.
 FIG. 5 is a flowchart showing the flow of the human position estimation processing in this embodiment. In step S101, background difference processing is used to detect pixels in which a difference arises within the frame image. Background difference processing compares a background image captured under specific conditions with an image captured under the same imaging conditions (field of view, viewpoint, focal length, and so on) and detects objects that exist in the captured image but not in the background image. To detect people, an image containing no person is created as the background image.
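A minimal sketch of the step S101 pixel test follows, assuming grayscale images held as NumPy arrays; the threshold value is an assumption, since the patent does not specify one:

```python
import numpy as np

def background_difference(background, frame, threshold=30):
    """Mark pixels whose absolute difference from the background image
    exceeds the threshold; these are the candidate person pixels."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold  # boolean mask of pixels where a difference arises
```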
 FIG. 6 is a schematic diagram for explaining the background difference processing. FIG. 6(a) shows the background image; the field of view is set to be substantially equal to the space conditioned by the air conditioner. In this figure, 101 denotes a window in the air-conditioned space and 102 denotes a door. FIG. 6(b) shows a frame image captured by the imaging sensor unit 24, whose field of view, viewpoint, focal length, and so on are the same as for the background image of FIG. 6(a); 103 denotes a person in the air-conditioned space. In the background difference processing, the person is detected by creating the difference image of FIGS. 6(a) and 6(b). FIG. 6(c) shows the difference image, in which white pixels are pixels with no difference and black pixels are pixels where a difference has arisen. The region of the person 103, which is absent from the background image but present in the captured frame image, is detected as the region 104 where the difference arises; that is, a person region can be detected by extracting the regions where differences arise from the difference image.
 The background image described above can be created by using inter-frame difference processing. FIGS. 7 to 9 are schematic diagrams for explaining this processing. FIGS. 7(a) to 7(c) show three consecutive frames captured by the imaging sensor unit 24 in a scene in which a person 103 moves from right to left in front of the window 101; FIG. 7(b) is the frame following FIG. 7(a), and FIG. 7(c) the frame following FIG. 7(b). FIGS. 8(a) to 8(c) show the inter-frame difference images obtained from the images of FIG. 7, where white pixels have no difference and black pixels 105 are pixels where a difference arises. If the only object moving within the field of view is a person, no person should be present in regions of the inter-frame difference image where no difference arises; in such regions, the background image is therefore replaced with the current frame. By repeating this processing, the background image can be created automatically. FIGS. 9(a) to 9(c) schematically show the update of the background image for the frames of FIGS. 7(a) to 7(c): the hatched region 106 is where the background image has been updated, the black region 107 is where no background image has yet been created, and the white region 108 is where the background image was not updated. The combined black region 107 and white region 108 in FIG. 9 equal the black region in FIG. 8. As shown, while the person moves, the black region 107 gradually shrinks and the background image is created automatically.
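The background-creation rule of FIGS. 7 to 9 can be sketched as below, again with grayscale NumPy arrays; the stillness threshold and the bookkeeping mask are illustrative assumptions:

```python
import numpy as np

def update_background(background, prev_frame, frame, known, threshold=10):
    """Where consecutive frames do not differ, no moving person is assumed,
    so the background is replaced with the current frame there. 'known'
    records pixels whose background has been created at least once (the
    shrinking black region 107 of FIG. 9)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    still = diff <= threshold
    background[still] = frame[still]  # update the background in still regions
    known |= still
    return background, known
```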
 Next, in step S102, the obtained difference region is divided so that, when several persons are present, it is split into several difference regions. A known image clustering technique may be used; for example, the difference image may be segmented according to the rule that a difference pixel and a neighboring difference pixel belong to the same region. FIG. 10 is a schematic diagram of this region division processing: FIG. 10(a) shows a difference image computed by the difference processing, in which the black pixels 111 and 112 are difference pixels, and FIG. 10(b) shows the result of segmenting FIG. 10(a) according to the above rule, where the horizontally striped region 113 and the vertically striped region 114 are judged to be separate regions. Noise removal such as the morphological processing widely used in image processing may also be applied at this point.
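For the clustering rule quoted above, connected-component labelling is the standard realization; the sketch below uses SciPy, and the minimum-size filter standing in for the morphological noise removal is an assumption:

```python
import numpy as np
from scipy.ndimage import label

def split_regions(diff_mask, min_pixels=20):
    """Group adjacent difference pixels into regions (step S102), one region
    per person candidate; tiny regions are dropped as noise."""
    labels, n = label(diff_mask)        # 4-connected components by default
    regions = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        if len(xs) >= min_pixels:
            regions.append((xs, ys))    # pixel coordinates of one region
    return regions
```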
 Next, in step S103, the position of the detected person is derived by calculating the centroid of each obtained region. Perspective projection transformation is used to derive the position of the person from the centroid position in the image.
 To explain the perspective projection transformation, two coordinate systems are used; FIG. 11 is a schematic diagram of these. First, consider the image coordinate system, a two-dimensional coordinate system in the captured image whose origin is the upper-left pixel, with u extending to the right and v downward. Next, consider the camera coordinate system, a three-dimensional coordinate system referenced to the camera, whose origin is the focal position of the imaging sensor unit 24, with Zc along the optical axis of the imaging sensor unit 24, Yc upward, and Xc to the left of the camera. The following relationship then holds by the perspective projection transformation.
[Equation 1]
 Here, f is the focal length [mm], (u0, v0) is the image center on the image coordinates [pixel], and (dpx, dpy) is the size of one pixel of the imaging element [mm/pixel]. Noting that Xc, Yc, and Zc are unknowns, Equation 1 shows that when the coordinates (u, v) on the image are known, the actual three-dimensional position corresponding to those coordinates lies on a straight line passing through the origin of the camera coordinate system.
 As shown in FIGS. 12(a) and 12(b), let the centroid position of the person on the image be (ug, vg) and its three-dimensional position in the camera coordinate system be (Xgc, Ygc, Zgc). FIG. 12(a) is a schematic side view of the air-conditioned space and FIG. 12(b) a schematic top view. Let the installation height of the imaging sensor unit 24 be H, let the Xc direction be horizontal, and let the optical axis Zc be inclined at an angle θ from the vertical. Let the direction in which the imaging sensor unit 24 faces be expressed by a vertical angle α (elevation, measured upward from the vertical line) and a horizontal angle β (measured to the right from the front reference line as viewed from the indoor unit). Further, letting the height of the person's centroid be h, the distance L from the imaging sensor unit 24 to the centroid position and the direction W, which give the three-dimensional position in the air-conditioned space, can be calculated by the following equations.
[Equation 2]
[Equation 4]
[Equation 5]
 Here, the imaging sensor unit 24 is normally installed at a height of about H = 2 m, and the height h of a person's centroid is about 80 cm. Considering this, Equations 3 and 5 show that when the installation height H of the imaging sensor unit 24 and the centroid height h of the person are specified, the centroid position (L, W) of the person in the air-conditioned space is uniquely determined from the centroid position (ug, vg) on the screen. FIGS. 13(a) and 13(b) show in which region of the air-conditioned space a person is judged to be when the centroid position on the image falls in each of the regions A to G, and FIGS. 14(a) and 14(b) are schematic diagrams of cases where persons are present. In FIG. 14(a), the centroid positions of the persons lie in regions A and F, so persons are judged to be in regions A and F of FIG. 13(b); in FIG. 14(b), the centroid position lies in region D, so a person is judged to be in region D of FIG. 13(b).
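Since the equations themselves are not reproduced here, the following sketch illustrates the same (ug, vg) to (L, W) mapping by intersecting the viewing ray through the centroid pixel with the horizontal plane at the assumed centroid height h; all camera parameters (image centre, focal length in pixels, tilt) are invented for illustration, and the x-right sign convention is mine rather than the patent's:

```python
import math

def centroid_position(ug, vg, u0=80.0, v0=60.0, f_px=150.0,
                      tilt_deg=30.0, H=2.0, h=0.8):
    """Estimate the distance L and horizontal direction W of a person's
    centroid from its image coordinates, for a sensor at height H tilted
    down by tilt_deg."""
    # Viewing ray in camera coordinates (z forward, y up, x right).
    x = (ug - u0) / f_px
    y = -(vg - v0) / f_px              # image v grows downward
    z = 1.0
    # Tilt the ray down by tilt_deg (rotation about the x axis).
    t = math.radians(tilt_deg)
    y_w = y * math.cos(t) - z * math.sin(t)
    z_w = y * math.sin(t) + z * math.cos(t)
    # Intersect with the plane at height h: H + s * y_w = h.
    s = (h - H) / y_w                  # y_w < 0 for rays below the horizon
    L = s * math.sqrt(x * x + y * y + z * z)      # sensor-to-centroid distance
    W = math.degrees(math.atan2(s * x, s * z_w))  # bearing right of front line
    return L, W
```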
 FIG. 15 is a flowchart for setting the region characteristics described later for each of the regions A to G using the imaging sensor unit 24, and FIG. 16 is a flowchart for determining in which of the regions A to G a person is present. The person position determination method is described below with reference to these flowcharts.
 In step S1, the presence or absence of a person in each region is first determined by the method described above at a predetermined period T1 (for example, 200 milliseconds if the frame rate of the imaging sensor unit 24 is 5 fps).
 Based on this determination result, each of the regions A to G is classified as a first region where people are often present (a frequently occupied place), a second region where people are present only briefly (a passage region or a region with short stay times), or a third region where people are present only very rarely (a non-living region such as near walls or windows, where people hardly go). Hereinafter the first, second, and third regions are called life category I, life category II, and life category III, respectively; they may also be called regions of region characteristic I, II, and III. Life category I (region characteristic I) and life category II (region characteristic II) may together be regarded as the living region (where people live) and life category III (region characteristic III) as the non-living region, broadly classifying the space by the frequency of human presence.
 This classification is performed from step S3 onward in the flowchart of FIG. 15, and the classification method is described with reference to FIGS. 17 and 18.
 FIG. 17 shows a case where the indoor unit of the air conditioner according to the present invention is installed in the LD of a 1LDK home consisting of one Japanese-style room, an LD (living/dining room), and a kitchen; the elliptical regions in FIG. 17 indicate the frequently occupied places reported by the occupants.
 As described above, the presence or absence of a person in each of the regions A to G is determined every period T1, and 1 (reaction) or 0 (no reaction) is output as the reaction result (determination) for that period; after this is repeated a number of times, all sensor outputs are cleared in step S2.
 In step S3, it is determined whether a predetermined cumulative operating time of the air conditioner has elapsed. If it is determined in step S3 that the predetermined time has not elapsed, the process returns to step S1; if it is determined that it has elapsed, each of the regions A to G is classified into one of life categories I to III by comparing the reaction results accumulated over the predetermined time with two thresholds.
 In more detail, with reference to FIG. 18 showing the long-term accumulation results, a first threshold and a second threshold smaller than the first are set. In step S4, it is determined whether the long-term accumulation result of each region A to G exceeds the first threshold, and a region determined to exceed it is classified as life category I in step S5. If it is determined in step S4 that the long-term accumulation result is below the first threshold, it is determined in step S6 whether it exceeds the second threshold; a region determined to exceed it is classified as life category II in step S7, while a region determined not to is classified as life category III in step S8.
 In the example of FIG. 18, regions C, D, and G are classified as life category I, regions B and F as life category II, and regions A and E as life category III.
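Steps S4 to S8 reduce to a two-threshold comparison; a minimal sketch follows, with the threshold values left as parameters because the patent gives no concrete numbers:

```python
def classify_region(long_term_count, first_threshold, second_threshold):
    """Classify one region by its long-term cumulative detection count
    (first_threshold > second_threshold)."""
    if long_term_count > first_threshold:
        return "I"    # frequently occupied place
    if long_term_count > second_threshold:
        return "II"   # passage / short-stay region
    return "III"      # non-living region (walls, windows, and the like)
```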
 FIG. 19 shows a case where the indoor unit of the air conditioner according to the present invention is installed in the LD of another 1LDK home, and FIG. 20 shows the result of classifying each of the regions A to G based on the long-term accumulation results in this case. In the example of FIG. 19, regions B, C, and E are classified as life category I, regions A and F as life category II, and regions D and G as life category III.
 The above classification of region characteristics (life categories) is repeated at predetermined intervals, but the classification result hardly changes unless the sofa, dining table, or the like in the room is moved.
 Next, the final determination of the presence or absence of a person in each of the regions A to G is described with reference to the flowchart of FIG. 16.
 Steps S21 to S22 are the same as steps S1 to S2 in the flowchart of FIG. 15 described above, so their description is omitted. In step S23, it is determined whether the reaction results of a predetermined number M (for example, 45) of periods T1 have been obtained; if not, the process returns to step S21, and if so, in step S24 the total of the reaction results over the period T1 × M is calculated as one cumulative reaction period count. This calculation is repeated, and in step S25 it is determined whether the cumulative reaction period count has been calculated a predetermined number of times (for example, N = 4); if not, the process returns to step S21, and if so, in step S26 the presence or absence of a person in each of the regions A to G is estimated based on the already classified region characteristics and the predetermined number of cumulative reaction period counts.
 In step S27, 1 is subtracted from the number of calculations (N) of the cumulative reaction period count before returning to step S21, so that the calculation of the cumulative reaction period count is repeated the predetermined number of times.
 Table 1 shows the history of the reaction results for the latest round (time T1 × M); in Table 1, for example, ΣA0 means the cumulative reaction period count of one round in region A.
[Table 1]
 Here, let the cumulative reaction period count of the round immediately before ΣA0 be ΣA1, the one before that ΣA2, and so on. When N = 4, for life category I a person is judged present if even one of the past four counts (ΣA4, ΣA3, ΣA2, ΣA1) is 1 or more. For life category II, a person is judged present if counts of 1 or more occur at least twice in the past four rounds, and for life category III, a person is judged present if counts of 2 or more occur at least three times in the past four rounds.
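The per-category decision rule just described can be written directly as a sketch; 'history' holds the last four cumulative reaction period counts (ΣA4, ΣA3, ΣA2, ΣA1 in the notation above):

```python
def presence(history, category):
    """Final presence determination for one region (N = 4 rounds)."""
    if category == "I":    # one count of 1 or more suffices
        return sum(1 for c in history if c >= 1) >= 1
    if category == "II":   # counts of 1 or more must occur at least twice
        return sum(1 for c in history if c >= 1) >= 2
    # Category III: counts of 2 or more must occur at least three times.
    return sum(1 for c in history if c >= 2) >= 3
```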
 Then, a time T1 × M after the above presence determination, the presence or absence of a person is again estimated in the same way from the past four rounds of history, the life categories, and the cumulative reaction period counts.
 That is, in the indoor unit of the air conditioner according to the present invention, a highly probable person position estimate is obtained by estimating the location of a person from the region characteristics obtained by long-term accumulation of the per-period region determination results and from the past history of each region's cumulative reaction period counts accumulated N times.
 Table 2 shows the time required for presence estimation and for absence estimation when the presence or absence of a person is determined in this way with T1 = 0.2 seconds and M = 45.
[Table 2]
 In this way, after the area to be conditioned by the indoor unit of the air conditioner according to the present invention is divided into the plurality of regions A to G by the imaging sensor unit 24, the region characteristic (life category I to III) of each region A to G is determined, and the time required for presence estimation and absence estimation is varied according to the region characteristic of each region.
 That is, since it takes about one minute for the air to arrive after the air-conditioning setting is changed, changing the setting over a short time (for example, a few seconds) only impairs comfort, and from an energy-saving viewpoint it is preferable not to condition places that people quickly leave. Therefore, the presence or absence of people in each of the regions A to G is detected first, and the air-conditioning setting is optimized especially for the regions where people are present.
More specifically, taking the presence/absence estimation times of a region classified as life category II as the standard: in a region classified as life category I, the presence of a person is estimated at shorter time intervals than in a category II region, while once people leave that region their absence is estimated at longer time intervals than in a category II region; the time required for presence estimation is thus set short and the time required for absence estimation long. Conversely, in a region classified as life category III, the presence of a person is estimated at longer time intervals than in a category II region, and absence at shorter time intervals; the time required for presence estimation is thus set long and the time required for absence estimation short. Furthermore, as described above, the life category of each region changes with the long-term accumulation results, and the presence and absence estimation times are varied accordingly.
In the above description, the difference method is used for human position estimation by the imaging sensor unit 24, but other techniques may of course be used. For example, a person-like region may be extracted from the frame image using image data of the whole human body. As such a technique, methods using HOG (Histograms of Oriented Gradients) features are widely known (N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection", In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Vol. 1, pp. 886-893, 2005). A HOG feature focuses on the edge strength in each edge direction within a local region, and a person region may be detected from the frame image by training and classifying these features with an SVM (Support Vector Machine) or the like.
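As one concrete illustration outside the patent text, OpenCV ships a HOG person detector with a pretrained linear SVM; a minimal sketch, in which the input file name is a placeholder for one frame from the sensor, might look like this.

```python
# Sketch of HOG + SVM person detection using OpenCV's pretrained detector.
import cv2

frame = cv2.imread("frame.png")  # placeholder: one frame from the sensor
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Returns one bounding box (x, y, w, h) per detected person.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
for (x, y, w, h) in boxes:
    cx, cy = x + w // 2, y + h // 2  # centroid used for position estimation
    print("person region centroid:", cx, cy)
```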
FIG. 21 is a flowchart showing the flow of human position estimation using the process of extracting a person-like region from the frame image. In this figure, steps identical to those in FIG. 5 are given the same reference numerals, and their detailed description is omitted here.
In step S104, a person-like region is extracted from the frame image as a human region by using the HOG features described above.
In step S103, the position of the detected person is obtained by calculating the centroid of the extracted human region. To convert the centroid position in the image into the position of the person, Equations 3 and 5 may be used as described above.
Alternatively, instead of using image data of the whole human body, a face-like region may be extracted from the frame image. As such a technique, methods using Haar-like features are widely known (P. Viola and M. Jones, "Robust real-time face detection", International Journal of Computer Vision, Vol. 57, No. 2, pp. 137-154, 2004). A Haar-like feature focuses on the luminance difference between local regions, and a person region may be detected from the frame image by training and classifying these features with an SVM (Support Vector Machine) or the like.
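Again purely as an illustration, OpenCV bundles pretrained Haar cascades for frontal faces; a minimal sketch, with the input file name as a placeholder, might read as follows.

```python
# Sketch of Haar-like face detection with OpenCV's bundled cascade.
import cv2

frame = cv2.imread("frame.png")  # placeholder input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cx, cy = x + w // 2, y + h // 2  # face-region centroid
    print("face region centroid:", cx, cy)
```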
FIG. 22 is a flowchart showing the flow of human position estimation using the process of extracting a face-like region from the frame image. In this figure, steps identical to those in FIG. 5 are given the same reference numerals, and their detailed description is omitted here.
In step S105, a face-like region is extracted from the frame image as a face region by using the Haar-like features described above.
In step S103, the position of the detected person is obtained by calculating the centroid of the extracted face region. To convert the centroid position in the image into the position of the person, perspective projection transformation may be used as described above. When the whole-body region is used and the person's position is detected from its centroid, the height of the centroid is taken as h = approximately 80 cm; when the face region is used, the person's position is detected by applying Equations 3 and 5 with the height of the face centroid taken as h = approximately 160 cm.
<Configuration of obstacle detection means>
The obstacle detection means, which detects obstacles using the above-described imaging sensor unit 24, will now be described. The term "obstacle" as used in this specification refers to any object that impedes the flow of air blown out from the air outlet 10 of the indoor unit to provide a comfortable space for the occupants; it is a collective term for objects other than occupants, such as furniture (tables, sofas, and the like), televisions, and audio equipment.
In the present embodiment, the obstacle detection means subdivides the floor surface of the living space, as shown in FIG. 23, on the basis of the vertical angle α and the horizontal angle β shown in FIG. 12. Each of these subdivisions is defined as an obstacle position determination region, or "position", and the means determines in which position an obstacle exists. The positions shown in FIG. 23, taken together, substantially coincide with the whole of the human position determination regions shown in FIG. 13(b); by making the region boundaries of FIG. 13(b) substantially coincide with the position boundaries of FIG. 23 and associating regions and positions as follows, the air-conditioning control described later can be performed easily and the required memory is kept to a minimum.
Region A: positions A1 + A2 + A3
Region B: positions B1 + B2
Region C: positions C1 + C2
Region D: positions D1 + D2
Region E: positions E1 + E2
Region F: positions F1 + F2
Region G: positions G1 + G2
In the region division of FIG. 23, the number of positions is set larger than the number of human position determination regions: at least two positions belong to each human position determination region, and these obstacle position determination regions are arranged side by side, left and right as viewed from the indoor unit. However, the air-conditioning control can also be performed with the area divided so that at least one position belongs to each human position determination region.
Also, in the region division of FIG. 23, each of the plurality of human position determination regions is classified according to its distance from the indoor unit, and the number of positions belonging to a near human position determination region is set larger than the number belonging to a far one; however, the number of positions belonging to each human position determination region may be made the same regardless of the distance from the indoor unit.
<Detection operation and data processing of obstacle detection means>
As described above, the air conditioner according to the present invention detects the presence or absence of a person in the regions A to G by the human body detection means, detects the presence or absence of an obstacle in the positions A1 to G2 by the obstacle detection means, and provides a comfortable space by drive-controlling the upper and lower blades 12 and the left and right blades 14, which serve as the wind direction changing means, on the basis of the detection signals (detection results) of the human body detection means and the obstacle detection means.
As described above, the human body detection means can detect the presence or absence of a person by exploiting, for example, the fact that people move, and detecting moving objects in the air-conditioned space. The obstacle detection means, by contrast, detects the distance to objects with the imaging sensor unit 24, and therefore cannot by itself distinguish a person from an obstacle.
If a person is misidentified as an obstacle, the region where the person is may fail to be air-conditioned, or the conditioned air stream may be blown directly onto the person; the result could be inefficient air-conditioning control, or control that makes the person uncomfortable.
The obstacle detection means therefore performs the data processing described below so that only obstacles are detected.
First, the obstacle detection means using the imaging sensor units will be described. To detect obstacles with the imaging sensor units, the stereo method is used. The stereo method uses the two imaging sensor units 24 and 26 and estimates the distance to a subject from their parallax. FIG. 24 is a schematic diagram for explaining obstacle detection by the stereo method. In the figure, the distance to a point P on an obstacle is measured using the imaging sensor units 24 and 26. Here, f is the focal length, B is the distance between the focal points of the two imaging sensor units 24 and 26, u1 is the u coordinate of the obstacle in the image of the imaging sensor unit 24, u2 is the u coordinate of the point in the image of the imaging sensor unit 26 corresponding to u1, and X is the distance from the imaging sensor units to the point P. The image center positions of the two imaging sensor units 24 and 26 are assumed to be equal. The distance X from the imaging sensor units to the point P is then obtained from the following equation.
X = f · B / |u1 − u2|    (Equation 6)
This equation shows that the distance X from the imaging sensor units to the obstacle point P depends on the parallax |u1 − u2| between the imaging sensor units 24 and 26.
The corresponding points may be searched for with, for example, a block matching method based on template matching. By using the imaging sensor units in this way, distance measurement (obstacle position detection) within the air-conditioned space is performed.
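As an illustration only (the patent does not specify an implementation), the block-matching stereo step and the Equation 6 conversion could look as follows; the camera parameters and file names are placeholder assumptions.

```python
# Sketch: block-matching stereo + Equation 6 (X = f*B/|u1-u2|) with OpenCV.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # image from unit 24
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # image from unit 26

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(float) / 16.0  # fixed point -> px

f_pixels = 500.0   # focal length in pixels (assumed value)
baseline_m = 0.05  # distance B between the two sensor units, in metres (assumed)

valid = disparity > 0
distance_m = f_pixels * baseline_m / disparity[valid]  # Equation 6, per pixel
```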
Equations 3, 5, and 6 show that the position of an obstacle is estimated from the pixel position and the parallax. In Table 3, i and j indicate the pixel positions to be measured; the vertical angle and horizontal angle are, respectively, the elevation angle α described above and the angle β measured rightward from the reference line directly in front of the indoor unit. That is, as seen from the indoor unit, pixels are set in the range of 5 to 80 degrees in the vertical direction and −80 to 80 degrees in the horizontal direction, and the imaging sensor units measure the parallax of each pixel.
[Table 3]
That is, the air conditioner performs distance measurement (obstacle position detection) by measuring the parallax at each pixel from pixel [14,15] to pixel [142,105].
The detection range of the obstacle detection means at the start of operation of the air conditioner may also be limited to elevation angles of 10 degrees or more. This is because people are likely to be present when the air conditioner starts operating, so the measurement data can be used effectively by measuring distances only in the areas where a person is unlikely to be detected, that is, the areas where walls are (since a person is not an obstacle, data for areas where people are present is not used, as described later).
Next, the measurement of the distance to an obstacle will be described with reference to the flowchart of FIG. 25.
First, in step S41, if it is determined that there is no person in the region corresponding to the current pixel (one of the regions A to G shown in FIG. 13), the process proceeds to step S42; if it is determined that a person is present, the process proceeds to step S43. That is, since a person is not an obstacle, for pixels corresponding to a region judged to contain a person, the previous distance data is used without performing a distance measurement (the distance data is not updated); distance measurement is performed, and the newly measured distance data is used (the distance data is updated), only for pixels corresponding to regions judged to contain no person.
In other words, when determining the presence or absence of an obstacle in each obstacle position determination region, whether to update the determination result of the obstacle detection means for that region is decided according to the result of the person presence/absence determination in the corresponding human position determination region, so that the obstacle presence/absence determination is performed efficiently. More specifically, in an obstacle position determination region belonging to a human position determination region judged by the human body detection means to contain no person, the previous determination result of the obstacle detection means is updated with the new result, whereas in an obstacle position determination region belonging to a human position determination region judged to contain a person, the previous determination result is not updated with a new one.
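A minimal sketch of this update policy follows; the data-structure names are hypothetical, since the patent describes only the behavior.

```python
# Sketch: update stored distance data only where no person is detected.
# `person_present` maps region names to the human-detection result, and
# `region_of_position` maps each position (e.g. "D1") to its region ("D").

def update_distances(stored, measured, person_present, region_of_position):
    for position, new_distance in measured.items():
        region = region_of_position[position]
        if not person_present[region]:
            stored[position] = new_distance  # no person: accept the new data
        # person present: keep the previous distance data unchanged
    return stored
```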
In step S42, the parallax of each pixel is calculated using the block matching method described above, and the process proceeds to step S44.
In step S44, data is acquired eight times for the same pixel, and it is determined whether the distance measurement based on the acquired data is complete. If the distance measurement is judged incomplete, the process returns to step S41; conversely, if it is judged complete, the process proceeds to step S45.
In step S45, the accuracy of the distance estimate is improved by evaluating its reliability. That is, if the estimate is judged reliable, distance number determination processing is performed in step S46; if it is judged unreliable, a neighboring distance number is used as the distance data for that pixel in step S47.
Since these processes are performed by the imaging sensor units 24 and 26, the imaging sensor units 24 and 26 act as the obstacle position detection means.
Next, the distance number determination processing in step S46 will be described, but the term "distance number" will be explained first.
A "distance number" expresses the approximate distance from the imaging sensor unit to a position P in the air-conditioned space. As shown in FIG. 26, the imaging sensor unit is installed 2 m above the floor; if the distance from the imaging sensor unit to the position P, the "distance corresponding to the distance number", is X [m], the position P is expressed by the following equations.
[Equation 7]
[Equation 8]
As shown in Equation 6, the distance X corresponding to a distance number depends on the parallax between the imaging sensor units 24 and 26. The distance numbers are integer values from 2 to 12, and the distance corresponding to each distance number is set as shown in Table 4.
[Table 4]
Table 4 shows the position P corresponding to each distance number and to the elevation angle (α) determined, via Equation 2, by the v coordinate of each pixel; the blackened cells indicate positions where h takes a negative value (h < 0), that is, positions that would dig into the floor. The settings in Table 4 apply to an air conditioner of capacity rank 2.2 kW, which is assumed to be installed in a 6-tatami room (diagonal distance = 4.50 m), so distance number 9 is set as the limit value (maximum value D). That is, in a 6-tatami room, a position corresponding to a distance number of 10 or more lies beyond the room's wall (outside the room) at a diagonal distance greater than 4.50 m; such distance numbers are meaningless and are shown in black.
Incidentally, Table 5 applies to an air conditioner of capacity rank 6.3 kW, which is assumed to be installed in a 20-tatami room (diagonal distance = 8.49 m), so distance number 12 is set as the limit value (maximum value D).
[Table 5]
Table 6 shows the limit values of the distance number set according to the capacity rank of the air conditioner and the elevation angle of each pixel.
[Table 6]
Next, the reliability evaluation processing in step S45 and the distance number determination processing in step S46 will be described.
As described above, a limit value is set for the distance number according to the capacity rank of the air conditioner and the elevation angle of each pixel; even when a distance number estimate gives N > maximum value D, the distance number is set to D unless all of the multiple measurement results equal N.
Eight distance numbers are determined for each pixel; the two largest and the two smallest are discarded, and the average of the remaining four is taken to fix the distance number. When the stereo method with block matching is used on an obstacle with no luminance variation, the parallax calculation is unstable and widely differing parallax results (distance numbers) are obtained from one measurement to the next. Therefore, in step S45 the values of the remaining four distance numbers are compared, and if their spread is equal to or greater than a threshold, the distance number is judged unreliable: in step S47, distance estimation for that pixel is abandoned and the distance number estimated at a neighboring pixel is used instead. The average is quantized to an integer value by rounding up the fractional part, and the position corresponding to the distance number fixed in this way is as given in Table 4 or Table 5.
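A minimal sketch of this confirmation step under the stated parameters (8 samples, drop the top two and bottom two, average the rest); the spread threshold is an assumed value, since the text does not give one.

```python
# Sketch: fix one pixel's distance number from 8 repeated measurements.
import math

def fix_distance_number(samples, max_d, spread_threshold=2):
    """samples: 8 distance numbers for one pixel; max_d: limit value D."""
    assert len(samples) == 8
    trimmed = sorted(samples)[2:6]           # drop two largest, two smallest
    if max(trimmed) - min(trimmed) >= spread_threshold:
        return None                          # unreliable: caller uses a neighbor
    value = math.ceil(sum(trimmed) / 4)      # average, rounded up to an integer
    return min(value, max_d)                 # clamp to the capacity-rank limit

print(fix_distance_number([7, 8, 7, 9, 7, 8, 12, 2], max_d=9))  # -> 8
```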
In the present embodiment, eight distance numbers are determined for each pixel, the two largest and two smallest are discarded, and the remaining four are averaged to fix the distance number; however, the number of distance numbers determined per pixel is not limited to eight, and the number averaged is not limited to four.
In step S43 of the flowchart of FIG. 25, the previous distance data is used; however, immediately after the air conditioner is installed, no previous data exists. When the determination by the obstacle detection means for an obstacle position determination region is the first one, a default value is therefore used, and the limit value (maximum value D) described above serves as the default.
FIG. 27 is an elevation view of a certain living space (a vertical section through the imaging sensor unit), rendered from the measurement results for a case in which the floor is 2 m below the imaging sensor unit and an obstacle such as a table stands 0.7 to 1.1 m above the floor. In the figure, the shaded portion, the portion hatched upward to the right, and the portion hatched downward to the right are judged to contain obstacles at short, middle, and long distance, respectively (these distances are described later).
<Obstacle avoidance control>
Based on the obstacle presence/absence determination described above, the upper and lower blades 12 and the left and right blades 14 serving as the wind direction changing means are controlled during heating as follows.
The following description uses the terms "block", "field", "short distance", "middle distance", and "long distance"; these terms are explained first.
The regions A to G shown in FIG. 13 belong to the following blocks.
Block N: region A
Block R: regions B and E
Block C: regions C and F
Block L: regions D and G
The regions A to G also belong to the following fields.
Field 1: region A
Field 2: regions B and D
Field 3: region C
Field 4: regions E and G
Field 5: region F
Furthermore, distance from the indoor unit is defined as follows.
Short distance: region A
Middle distance: regions B, C, and D
Long distance: regions E, F, and G
Table 7 shows the target setting angles at each position for the five left blades and five right blades that make up the left and right blades 14. As for the signs attached to the numbers (angles), as shown in FIG. 28, the direction in which a left or right blade points inward is defined as plus (+, unsigned in Table 7) and the direction in which it points outward as minus (−).
[Table 7]
"Heating B region" in Table 7 means a heating area in which obstacle avoidance control is performed, and "normal automatic wind direction control" means wind direction control without obstacle avoidance control. Whether to perform obstacle avoidance control is decided on the basis of the temperature of the indoor heat exchanger 6: when the temperature is low, wind direction control that keeps the air stream off the occupants is performed; when it is too high, wind direction control at the maximum airflow position is performed; and at a moderate temperature, the air is directed into the heating B region. Here, "low temperature", "too high", "wind direction control that keeps the air stream off the occupants", and "wind direction control at the maximum airflow position" have the following meanings.
- Low temperature: the optimum temperature of the indoor heat exchanger 6 is set at skin temperature (33 to 34°C); a "low" temperature is one that can fall below this (for example, 32°C).
- Too high a temperature: for example, 56°C or higher.
- Wind direction control that keeps the air stream off the occupants: the upper and lower blades 12 are angle-controlled so that the air flows along the ceiling rather than into the living space.
- Wind direction control at the maximum airflow position: since bending the air stream with the upper and lower blades 12 and the left and right blades 14 always produces resistance (loss), the maximum airflow position is the wind direction setting at which the loss is as close to zero as possible (for the left and right blades 14, pointing straight ahead; for the upper and lower blades 12, pointing 35 degrees below horizontal).
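A minimal sketch of this mode selection (the threshold values are the examples from the text; the function and mode names are hypothetical):

```python
# Sketch: choose the heating wind-direction mode from the indoor
# heat-exchanger temperature, per the thresholds described above.
def select_heating_mode(heat_exchanger_temp_c):
    if heat_exchanger_temp_c <= 32.0:      # may fall below skin temperature
        return "ceiling_flow"              # keep the air stream off occupants
    if heat_exchanger_temp_c >= 56.0:      # too high
        return "max_airflow_position"      # straight ahead, 35 deg below horizontal
    return "heating_B_region"              # moderate: obstacle avoidance control

print(select_heating_mode(45.0))  # -> heating_B_region
```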
Table 8 shows the target setting angles of the upper and lower blades 12 in each field when obstacle avoidance control is performed. The upper blade angle (γ1) and lower blade angle (γ2) in Table 8 are angles (elevation angles) measured upward from the vertical.
[Table 8]
Next, obstacle avoidance control according to the position of the obstacle will be described in detail, but the terms "swing operation", "position-stop operation", and "block-stop operation" used in that control are explained first.
The swing operation is a rocking motion of the left and right blades 14: basically, the blades swing through a predetermined left-right angular width centered on one target position, with no hold time at either end of the swing.
In the position-stop operation, the corrections of Table 9 are applied to the target setting angle of a given position (the angles of Table 7) to obtain a left end and a right end. In operation, the blades have a wind-direction hold time (a time during which the left and right blades 14 are held fixed) at each of the left and right ends: for example, when the hold time elapses at the left end, the blades move to the right end, maintain the right-end wind direction until the hold time elapses there, then move back to the left end, and this is repeated. The hold time is set to, for example, 60 seconds.
[Table 9]
That is, if there is an obstacle at a certain position and the target setting angle of that position were used as it is, the warm air would always strike the obstacle; by applying the corrections of Table 9, the warm air can be made to reach the place where the person is from beside the obstacle.
In the block-stop operation, the setting angles of the left and right blades 14 corresponding to the left and right ends of each block are determined on the basis of, for example, Table 10. In operation, the blades have a wind-direction hold time at each of the block's left and right ends: for example, when the hold time elapses at the left end, the blades move to the right end, maintain the right-end wind direction until the hold time elapses there, then move back to the left end, and this is repeated. As in the position-stop operation, the hold time is set to, for example, 60 seconds. Since the left and right ends of each block coincide with the left and right ends of the human position determination regions belonging to that block, the block-stop operation can also be called a stop operation over a human position determination region.
[Table 10]
The position-stop operation and the block-stop operation are used selectively according to the size of the obstacle. When the obstacle ahead is small, position-stop operation centered on the position containing the obstacle blows air around the obstacle; when the obstacle ahead is large, for example when it occupies the whole area in front of the region where the person is, block-stop operation blows air over a wide range.
In the present embodiment, the swing operation, the position-stop operation, and the block-stop operation are referred to collectively as the rocking operation of the left and right blades 14.
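As an illustration only, the two stop operations share the same end-to-end dwell pattern; a sketch of that shared loop follows, with hypothetical angle values and helper names.

```python
# Sketch: alternate the left/right blades between two end angles, holding
# each end for the wind-direction hold time (60 s in the text).
import time

def stop_operation(left_end_deg, right_end_deg, hold_s=60.0, swings=4,
                   set_vanes=print):
    ends = (left_end_deg, right_end_deg)
    for i in range(swings):
        set_vanes(ends[i % 2])  # drive the blades to this end angle
        time.sleep(hold_s)      # hold the wind direction, then switch ends

# Position-stop: ends = Table 7 target angle corrected per Table 9.
# Block-stop: ends = the block's left and right ends (Table 10).
stop_operation(-10, 30, hold_s=0.01, swings=2)  # tiny demo values
```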
Control examples for the upper and lower blades 12 and the left and right blades 14 are described in detail below. When the human body detection means determines that a person is present in only a single region, and the obstacle detection means determines that an obstacle is present in an obstacle position determination region located in front of the human position determination region where the person is, air flow control is performed in which the upper and lower blades 12 are controlled so that the air passes over the obstacle. When the obstacle detection means determines that an obstacle is present in an obstacle position determination region belonging to the human position determination region where the person is, one of two air flow controls is selected: a first air flow control, in which the left and right blades 14 are swung within at least one obstacle position determination region belonging to the region where the person is, with no hold time for the blades at the ends of the swing range; and a second air flow control, in which the left and right blades 14 are swung within at least one obstacle position determination region belonging to that region or to an adjacent human position determination region, with a hold time for the blades at both ends of the swing range.
In the following description the control of the upper and lower blades 12 and the control of the left and right blades 14 are treated separately, but depending on the positions of people and obstacles the two controls are combined as appropriate.
A. Upper and lower blade control
(1) When a person is in one of the regions B to G and there is an obstacle at one of positions A1 to A3 in front of the region where the person is
The setting angles of the upper and lower blades 12 are corrected as shown in Table 11 relative to the normal field wind direction control (Table 8), and air flow control is performed with the upper and lower blades 12 set to point upward.
[Table 11]
(2) When a person is in one of the regions B to G and there is no obstacle in region A in front of the region where the person is (cases other than (1) above)
Normal automatic wind direction control is performed.
B. Left and right blade control
B1. When a person is in region A (short distance)
(1) When there is exactly one obstacle-free position in region A
The first air flow control is performed by swinging the blades left and right about the target setting angle of the obstacle-free position. For example, when there are obstacles at positions A1 and A3 and none at position A2, the blades are swung left and right about the target setting angle of position A2. The obstacle-free position A2 is basically what is air-conditioned, but since positions A1 and A3 are not guaranteed to be unoccupied, the swing motion distributes at least some of the air flow to positions A1 and A3 as well.
More specifically, the target setting angle and the correction angle (swing amplitude) for position A2 are determined from Tables 7 and 9, so the left blades and right blades both keep swinging, without stopping, through a range of ±10 degrees about 10 degrees. The timing with which the left blades and the right blades are swung left and right is set to be identical, so the rocking motions of the left and right blades are synchronized.
(2) When there are two obstacle-free positions in region A and they are adjacent (A1 and A2, or A2 and A3)
The first air flow control is performed by swinging the blades with the target setting angles of the two obstacle-free positions as the two ends of the swing, so that the obstacle-free positions are basically what is air-conditioned.
(3) When there are two obstacle-free positions in region A and they are separated (A1 and A3)
The second air flow control is performed by block-stop operation with the target setting angles of the two obstacle-free positions as the two ends.
(4) When there are obstacles at all positions in region A
Since it is unclear where to aim, the second air flow control is performed by block-stop operation of block N. Block-stop operation produces a more directional wind than aiming at the whole region, so the air reaches farther and is more likely to get past the obstacles. That is, even when obstacles are scattered over region A, there are usually gaps between them, and air can be blown through those gaps.
(5) When there are no obstacles at any position in region A
Normal automatic wind direction control for region A is performed.
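A minimal sketch of the region-A branch above (cases B1(1) to B1(5)); the control primitives named in the strings are the operations defined earlier, and the function name is hypothetical.

```python
# Sketch: choose the left/right blade control for region A (cases B1(1)-(5)).
def region_a_control(clear):  # clear: set of obstacle-free positions
    if clear == {"A1", "A2", "A3"}:
        return "normal automatic wind direction control"        # case (5)
    if not clear:
        return "block-stop operation of block N"                # case (4)
    if len(clear) == 1:
        return f"swing about {min(clear)}"                      # case (1)
    if clear == {"A1", "A3"}:
        return "block-stop operation between A1 and A3"         # case (3)
    return "swing between the two adjacent clear positions"     # case (2)

print(region_a_control({"A2"}))  # -> swing about A2
```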
B2. When a person is in one of regions B, C, or D (middle distance)
(1) When only one of the two positions belonging to the region where the person is contains an obstacle
The first air flow control is performed by swinging the blades left and right about the target setting angle of the obstacle-free position. For example, when a person is in region D and there is an obstacle only at position D2, the blades are swung left and right about the target setting angle of position D1.
(2) When both of the two positions belonging to the region where the person is contain obstacles
The second air flow control is performed by block-stop operation of the block containing the region where the person is. For example, when a person is in region D and there are obstacles at both positions D1 and D2, block L is operated in block-stop mode.
(3) When there is no obstacle in the region where the person is
Normal automatic wind direction control for that region is performed.
B3. When a person is in one of regions E, F, or G (long distance)
(1) When only one of the two positions belonging to the middle-distance region in front of the region where the person is contains an obstacle (example: a person in region E, an obstacle at position B2, none at position B1)
(1.1) When there are no obstacles on either side of the position containing the obstacle (example: no obstacles at positions B1 and C1)
(1.1.1) When there is no obstacle behind the position containing the obstacle (example: no obstacle at position E2)
The second air flow control is performed by position-stop operation centered on the position containing the obstacle. For example, when a person is in region E and there is an obstacle at position B2 but none beside or behind it, the air flow can be sent into region E around the sides of the obstacle at position B2.
(1.1.2) When there is an obstacle behind the position containing the obstacle (example: an obstacle at position E2)
The first air flow control is performed by swinging the blades about the target setting angle of the obstacle-free position in the middle-distance region. For example, when a person is in region E and there is an obstacle at position B2 with none beside it but another behind it, it is more advantageous to send the air flow in from the obstacle-free position B1.
(1.2) When one of the two positions adjacent to the position containing the obstacle contains an obstacle and the other does not
The first air flow control is performed by swinging the blades about the target setting angle of the obstacle-free position. For example, when a person is in region F, there is an obstacle at position C2, and of the positions adjacent to C2 there is an obstacle at position D1 but none at C1, the air flow can be sent to region F from the obstacle-free position C1, avoiding the obstacle at position C2.
(2) When both of the two positions belonging to the middle-distance region in front of the region where the person is contain obstacles
The second air flow control is performed by block-stop operation of the block containing the region where the person is. For example, when a person is in region F and there are obstacles at both positions C1 and C2, block C is operated in block-stop mode. In this case there are obstacles in front of the person and no way to avoid them, so block-stop operation is performed regardless of whether the blocks adjacent to block C contain obstacles.
(3) When neither of the two positions belonging to the middle-distance region in front of the region where the person is contains an obstacle (example: a person in region F, no obstacles at positions C1 and C2)
(3.1) When only one of the two positions belonging to the region where the person is contains an obstacle
The first air flow control is performed by swinging the blades about the target setting angle of the other, obstacle-free position. For example, when a person is in region F, there are no obstacles at positions C1, C2, and F1, and there is an obstacle at position F2, the space in front of region F is open, so, taking the long-distance obstacle into account, the air is directed mainly at the obstacle-free long-distance position F1.
(3.2) When both of the two positions belonging to the region where the person is contain obstacles
The second air flow control is performed by block-stop operation of the block containing the region where the person is. For example, when a person is in region G, there are no obstacles at positions D1 and D2, and there are obstacles at both positions G1 and G2, the space in front of region G is open, but the whole region contains obstacles and it is unclear where to aim, so block L is operated in block-stop mode.
(3.3) When neither of the two positions belonging to the region where the person is contains an obstacle
Normal automatic wind direction control for that region is performed.
<Person-wall proximity control>
When a person and a wall are in the same region, the person is necessarily in front of the wall and close to it. During heating, warm air tends to pool near walls and the room temperature near a wall tends to become higher than elsewhere in the room, so person-wall proximity control is performed.
In this control, the parallax is calculated at pixels different from the measurement pixels [i, j] shown in Table 3, and the distances are detected, so that the positions of the front wall and the left and right walls are recognized first.
That is, using the imaging sensor units 24 and 26, the parallax of the pixels corresponding to the approximately horizontal forward direction is first calculated and the distance to the front wall is measured to obtain its distance number. The parallax of the pixels corresponding to the approximately horizontal left direction is then calculated and the distance to the left wall is measured to obtain its distance number, and the distance number of the right wall is obtained in the same way.
This will be described in more detail with reference to FIG. 29, a top view of a room in which the indoor unit is installed, showing a case in which a front wall WC, a left wall WL, and a right wall WR are present in front of, to the left of, and to the right of the indoor unit, respectively. The numbers on the left side of FIG. 29 are the distance numbers of the corresponding cells, and Table 12 gives the distances from the indoor unit to the near point and far point corresponding to each distance number.
[Table 12]
As described above, the "obstacles" referred to in this specification are assumed to be furniture such as tables and sofas, televisions, audio equipment, and the like. Given the usual heights of such obstacles, they are not detected at elevation angles of 75 degrees and above, and whatever is detected there can be presumed to be a wall. In the present embodiment, therefore, the distances to the front, left end, and right end as seen from the indoor unit are detected at elevation angles of 75 degrees or more, and a wall is assumed to lie on the extension through each detected position.
In terms of horizontal viewing angle, the left wall WL can be presumed to lie at angles of −80 and −75 degrees, the front wall WC at angles of −15 to 15 degrees, and the right wall WR at angles of 75 and 80 degrees; among the pixels shown in Table 3, the pixels corresponding to these horizontal viewing angles in the wall-detection elevation range of 75 degrees and above are as follows.
Left end: [14,15], [18,15], [14,21], [18,21], [14,27], [18,27]
Front: [66,15] to [90,15], [66,21] to [90,21], [66,27] to [90,27]
Right end: [138,15], [142,15], [138,21], [142,21], [138,27], [142,27]
In determining the distance numbers from the indoor unit to the front wall WC, the left wall WL, and the right wall WR, wall-surface data is first extracted at each of the above pixels, as shown in Table 13.
[Table 13]
Next, as shown in Table 14, the upper and lower limit values of each set of wall-surface data are deleted to exclude unnecessary data, and the distance numbers to the front wall WC, the left wall WL, and the right wall WR are determined on the basis of the wall-surface data thus obtained.
[Table 14]
As the distance numbers to the front wall WC, left wall WL, and right wall WR, the maximum values in Table 14 (WC: 5, WL: 6, WR: 3) can be adopted. Adopting the maximum values means air-conditioning a room in which the front, left, and right walls are far from the indoor unit (a large room), so a wider space is set as the target of air-conditioning control. The maximum is not essential, however; the average value can also be adopted.
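A minimal sketch of this wall-distance determination; dropping exactly one minimum and one maximum per wall is an assumption, since the text says only that the upper and lower limit values are deleted, and the sample values below are invented for the demonstration.

```python
# Sketch: determine a wall's distance number from its per-pixel samples.
def wall_distance_number(samples, use_average=False):
    trimmed = sorted(samples)[1:-1]  # delete the lower and upper limit values
    if use_average:
        return round(sum(trimmed) / len(trimmed))
    return max(trimmed)  # maximum: assume the larger (farther) room

walls = {"WC": [4, 5, 5, 3, 6, 5], "WL": [6, 6, 5, 7, 6, 6], "WR": [3, 2, 3, 3, 8, 3]}
print({name: wall_distance_number(data) for name, data in walls.items()})
# -> {'WC': 5, 'WL': 6, 'WR': 3}
```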
After the distance numbers to the front wall WC, left wall WL, and right wall WR have been determined in this way, the obstacle detection means determines whether there is a wall in an obstacle position determination region belonging to a human position determination region that the human body detection means has judged to contain a person. If a wall is determined to be present, a person can be assumed to be in front of the wall, so during heating the temperature is set lower than the set temperature selected on the remote control.
This person-wall proximity control is described concretely below.
A. When the person is in the short-distance or middle-distance region
The short-distance and middle-distance regions are close to the indoor unit and small in area, so the room temperature there rises to a greater degree; the set temperature selected on the remote control is therefore lowered by a first predetermined amount (for example, 2°C).
B. When the person is in the long-distance region
The long-distance region is far from the indoor unit and large in area, so the room temperature there rises to a lesser degree than in the short- or middle-distance regions; the set temperature selected on the remote control is therefore lowered by a second predetermined amount smaller than the first (for example, 1°C).
Because the long-distance region has a large area, a person and a wall detected in the same person position determination area may in fact be far apart. Person-wall proximity control is therefore performed only for the combinations shown in Table 15, shifting the temperature according to the positional relationship between the person and the wall (a sketch of this shift logic follows Table 15).
[Table 15]
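As a hedged sketch of cases A and B above, the following Python fragment shifts the heating setpoint by region; the region names, the boolean standing in for the Table 15 combinations, and the function names are assumptions for illustration.

FIRST_SHIFT = 2.0   # deg C, short- and middle-distance regions
SECOND_SHIFT = 1.0  # deg C, long-distance region

def heating_setpoint(set_temp, region, person_near_wall):
    # Return the adjusted setpoint for heating mode.
    if not person_near_wall:
        return set_temp
    if region in ("short", "middle"):
        return set_temp - FIRST_SHIFT
    if region == "long":
        return set_temp - SECOND_SHIFT
    return set_temp

print(heating_setpoint(24.0, "short", True))  # -> 22.0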
<Wall and obstacle separation detection>
Because walls and obstacles are detected at different elevation angles by the imaging sensor unit, controlling the unit so that its detection order differs between wall detection and obstacle detection allows walls and obstacles to be detected efficiently and accurately.
As one example, during wall detection, distance measurement is performed in the following order:
Pixel [0,0] → pixel [32,0] → pixel [32,1] → pixel [0,1] → pixel [0,2] → pixel [32,2] → end of wall detection
During obstacle detection, on the other hand, in the far-distance range (elevation angles of 75 to 65 degrees), distance measurement proceeds in 5-degree steps both horizontally and vertically, in the following order:
Pixel [0,1] → pixel [9,1] → pixel [9,2] → pixel [0,2] → pixel [0,3] → pixel [9,3] → end of first far-distance measurement
Pixel [10,1] → pixel [21,1] → pixel [21,2] → pixel [10,2] → pixel [10,3] → pixel [21,3] → end of second far-distance measurement
Pixel [22,1] → pixel [32,1] → pixel [32,2] → pixel [22,2] → pixel [22,3] → pixel [32,3] → end of last far-distance measurement
In the middle-distance range (elevation angles of 70 to 55 degrees), distance measurement likewise proceeds in 5-degree steps both horizontally and vertically, in the following order:
Pixel [0,2] → pixel [9,2] → pixel [9,3] → pixel [0,3] → pixel [0,4] → pixel [9,4] → pixel [9,5] → pixel [0,5] → end of first middle-distance measurement
Pixel [10,2] → pixel [21,2] → pixel [21,3] → pixel [10,3] → pixel [10,4] → pixel [21,4] → pixel [21,5] → pixel [10,5] → end of second middle-distance measurement
Pixel [22,2] → pixel [32,2] → pixel [32,3] → pixel [22,3] → pixel [22,4] → pixel [32,4] → pixel [32,5] → pixel [22,5] → end of last middle-distance measurement
In the near-distance range (elevation angles of 60 to 20 degrees), distance measurement proceeds in 10-degree steps both horizontally and vertically, in the following order:
Pixel [0,4] → pixel [9,4] → pixel [9,6] → pixel [0,6] → pixel [0,8] → pixel [9,8] → pixel [9,10] → pixel [0,10] → pixel [0,12] → pixel [9,12] → end of first near-distance measurement
Pixel [10,4] → pixel [21,4] → pixel [21,6] → pixel [10,6] → pixel [10,8] → pixel [21,8] → pixel [21,10] → pixel [10,10] → pixel [10,12] → pixel [21,12] → end of second near-distance measurement
Pixel [22,4] → pixel [32,4] → pixel [32,6] → pixel [22,6] → pixel [22,8] → pixel [32,8] → pixel [32,10] → pixel [22,10] → pixel [22,12] → pixel [32,12] → end of last near-distance measurement
That is, the pixels corresponding to the far distance (elevation angles of 75 to 65 degrees) and the middle distance (70 to 55 degrees) are scanned vertically in 5-degree steps, while the pixels corresponding to the near distance (60 to 20 degrees) are scanned vertically in 10-degree steps. The number of detection cells in each region thus becomes roughly equal, and walls and obstacles can be detected separately and efficiently; a sketch of this serpentine scan order follows.
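For illustration, here is a minimal Python sketch of the serpentine scan order, assuming the block boundaries stated above; the pixel sequences in the text list only the turning points of each sweep.

def serpentine(cols, rows, row_step=1):
    # Yield (i, j) addresses block by block, reversing the horizontal
    # sweep direction on alternate rows, as in the sequences above.
    reverse = False
    for j in range(rows[0], rows[1] + 1, row_step):
        cs = list(range(cols[0], cols[1] + 1))
        for i in (reversed(cs) if reverse else cs):
            yield (i, j)
        reverse = not reverse

# First far-distance block (columns 0..9, rows 1..3):
order = list(serpentine((0, 9), (1, 3)))
print(order[0], order[9], order[10], order[19])  # (0,1) (9,1) (9,2) (0,2)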
Although this embodiment adopts the stereo method as its distance detection means, a method using the light projecting unit 28 and the imaging sensor unit 24 may be adopted instead. This method is described below.
As shown in Fig. 30, the main body 2 of this embodiment has an imaging sensor unit 24 and a light projecting unit 28. The light projecting unit 28 consists of a light source and a scanning unit (not shown); an LED or a laser may be used as the light source, and the scanning unit can change the projection direction arbitrarily by means of a galvanometer mirror or the like. Fig. 31 is a schematic diagram showing the relationship between the imaging sensor unit 24 and the light projecting unit 28. Strictly, the projection direction has two degrees of freedom and the imaging surface is a two-dimensional plane, but to simplify the explanation the projection direction is treated as having one degree of freedom and the imaging surface as a horizontal line. Here, the light projecting unit 28 projects light in direction ρ relative to the optical-axis direction of the imaging sensor unit 24. By taking the difference between the frame image captured immediately before projection and the frame image captured during projection, the imaging sensor unit 24 obtains the image coordinate u1 of the point P that reflects the projected light. With X denoting the distance from the imaging sensor unit 24 to the point P, the following relationship holds.
[Equation 9]
Therefore,
[Equation 10]
That is, by detecting the reflection point P of the light while varying the projection direction ρ of the light projecting unit 28, distance information within the air-conditioned space can be obtained.
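The geometry can be illustrated with a small Python sketch. This is a sketch under assumed conventions — a pinhole camera of focal length f (in pixels) and a projector offset from the camera by a baseline B along the image axis — neither of which is named in the text; Equations 9 and 10 give the patent's exact relation.

import math

def distance_from_projection(u1, f, B, rho):
    # Intersect the camera ray through image coordinate u1 with the
    # projector ray cast at angle rho (radians) from the optical axis.
    denom = u1 / f - math.tan(rho)
    if abs(denom) < 1e-9:
        return float("inf")  # rays nearly parallel; no reliable estimate
    return B / denom

print(round(distance_from_projection(u1=120, f=600, B=0.05, rho=math.radians(10)), 2))  # ~2.11 m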
In Table 16, i and j denote the addresses to be scanned by the light projecting unit 28; the vertical and horizontal angles denote, respectively, the elevation angle α described above and the angle β measured rightward from the reference line directly in front of the indoor unit. That is, viewed from the indoor unit, addresses are set over a range of 5 to 80 degrees vertically and −80 to 80 degrees horizontally, and the light projecting unit 28 measures each address, scanning the living space.
[Table 16]
Next, distance measurement to an obstacle is described with reference to the flowchart of Fig. 32. Since this flowchart closely resembles that of Fig. 25, only the differing steps are described below.
First, in step S48, if it is determined that no person is present in the area (one of areas A to G shown in Fig. 13) corresponding to the address [i, j] at which the light projecting unit 28 projects light, the process proceeds to step S49; if a person is determined to be present, it proceeds to step S43. That is, since a person is not an obstacle, pixels corresponding to an area judged to contain a person use the previous distance data without new measurement (the distance data is not updated), and distance measurement is performed only for pixels corresponding to areas judged to contain no person, whose newly measured distance data is then used (the distance data is updated).
In step S49, the distance to the obstacle is estimated by performing the projection process described above and acquiring the reflection point from the imaging sensor unit 24. Of course, as described earlier, the distance-number determination process may be used so that processing is carried out in terms of distance numbers. A sketch of this update rule follows.
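The update rule of steps S48 and S49 can be sketched as follows; every name is illustrative, and measure() stands in for the projection-and-acquisition step of step S49.

def update_distances(addresses, region_of, person_in_region, measure, dist):
    # dist maps address -> last distance data (or distance number).
    for addr in addresses:
        if person_in_region(region_of(addr)):
            continue  # a person is not an obstacle: keep the old data
        dist[addr] = measure(addr)  # no person: refresh the distance data
    return dist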
The human body detection means may also be used as the distance detection means. This consists of a human-body distance detection means and an obstacle detection means, both based on the human body detection means. This processing is described below.
As shown in Fig. 33, the main body 2 of this embodiment has a single imaging sensor unit 24. Fig. 34 is a flowchart showing the processing flow of the human-body distance detection means based on the human body detection means. Steps identical to those of Fig. 5 carry the same reference numerals, and their detailed description is omitted here.
In step S201, in each area into which the human body detection means has divided the image, the human-body distance detection means detects, among the pixels where a difference has arisen, the pixel closest to the top of the image, and acquires its v coordinate as v1.
Then, in step S202, the human-body distance detection means estimates the distance from the imaging sensor unit to the person using v1, the topmost v coordinate. Fig. 35 is a schematic diagram explaining this process: Fig. 35(a) depicts a scene with two persons 121 and 122, one near the camera and one far from it, and Fig. 35(b) shows the difference image captured by the imaging sensor unit for that scene. The difference regions 123 and 124 correspond to persons 121 and 122, respectively. Here the height h1 of a person is assumed known, and all persons in the air-conditioned space are assumed to be of roughly equal height. As noted earlier, the imaging sensor unit 24 is installed at a height of 2 m, so, as shown in Fig. 35(a), it images the persons looking down from above. The closer a person is to the imaging sensor unit, the lower in the image the person appears, as shown in Fig. 35(b). In other words, the topmost v coordinate v1 of a person corresponds one-to-one to the distance from the imaging sensor unit to that person. Consequently, by establishing in advance the correspondence between v1 and the distance to the person, distance detection based on the human body detection means becomes possible. Table 17 is an example of such a correspondence, computed in advance using the average height of a person as h1 and an imaging sensor unit with VGA resolution. From this table, if v1 = 70, for example, the distance from the imaging sensor unit 24 to the person is estimated to be about 2 m (a sketch of this lookup follows Table 17).
[Table 17]
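A minimal sketch of the v1-to-distance lookup follows; the calibration pairs are placeholders except for the v1 = 70 -> about 2 m example given in the text, and the real values come from a table such as Table 17.

V1_TO_DISTANCE = [(40, 3.0), (70, 2.0), (120, 1.0)]  # (v1, metres), assumed

def person_distance(v1):
    # Return the pre-computed distance whose v1 entry is nearest
    # (VGA image, sensor at 2 m looking down at persons of height h1).
    return min(V1_TO_DISTANCE, key=lambda pair: abs(pair[0] - v1))[1]

print(person_distance(70))  # -> 2.0, matching the example above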
Next, the obstacle detection means based on the human body detection means is described.
Fig. 36 is a flowchart showing the processing flow of the obstacle detection means based on the human body detection means.
In step S203, the obstacle detection means estimates the person's height v2 on the image, using the distance from the imaging sensor unit 24 to the person estimated by the human-body distance detection means. Fig. 37 is a schematic diagram explaining this process, showing a scene similar to that of Fig. 35. As before, the height h1 of a person is assumed known and all persons in the air-conditioned space are assumed to be of roughly equal height. Since the imaging sensor unit 24 is installed at a height of 2 m, it images the persons looking down from above, as shown in Fig. 37(a). The closer a person is to the imaging sensor unit 24, the larger the person appears on the image, as shown in Fig. 37(b). In other words, the difference v2 between the topmost and bottommost v coordinates of a person corresponds one-to-one to the distance from the imaging sensor unit 24 to that person, so if that distance is known, the person's size on the image can be estimated. It suffices to establish in advance the correspondence between v2 and the distance from the imaging sensor unit to the person. Table 18 is an example of such a correspondence between the topmost v coordinate v1, the height difference v2, and the distance from the imaging sensor unit 24 to the person, computed in advance using an imaging sensor unit with VGA resolution. From this table, if the distance from the imaging sensor unit 24 to the person is 2 m, for example, v2 is estimated to be 85.
[Table 18]
In step S204, the obstacle detection means detects, in each region of the difference image, the topmost and bottommost pixels where a difference has arisen, and computes the difference v3 between their v coordinates.
In step S205, the person's height v2 on the image, estimated from the distance between the imaging sensor unit 24 and the person, is compared with the person's height v3 obtained from the actual difference image, to estimate whether an obstacle exists between the imaging sensor unit 24 and the person. Figs. 38 and 39 are schematic diagrams explaining this process: Fig. 38 shows a scene similar to Fig. 35 with no obstacle between the imaging sensor unit 24 and the person, while Fig. 39 shows a scene with an obstacle. In Fig. 38, with no obstacle present, the estimated height v2 and the height v3 of person 123 obtained from the actual difference image are nearly equal. In Fig. 39, by contrast, an obstacle between the imaging sensor unit 24 and the person occludes part of the person, and no difference arises in the occluded region. Noting that most occluding objects in an air-conditioned space stand on the floor, it is the lower part of the person that is occluded. This means that when the distance to the person is obtained from v1, the topmost v coordinate of the person region, the distance is obtained accurately even if an obstacle lies between the imaging sensor unit 24 and the person. On the other hand, if such an obstacle exists, the height v3 of person 125 obtained from the actual difference image is expected to be smaller than the height v2 estimated from the distance. Therefore, when v3 is judged in step S205 to be sufficiently smaller than v2, the process proceeds to step S206 and it is judged that an obstacle exists between the imaging sensor unit and the person. The distance between the imaging sensor unit and the obstacle is then taken to be equal to the distance from the imaging sensor unit to the person obtained from the topmost v coordinate v1. A sketch of this comparison follows.
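A hedged sketch of this comparison is given below; the 0.7 ratio standing in for "sufficiently smaller" is an assumption, not a value from the patent.

def obstacle_between(v2_expected, v3_measured, ratio=0.7):
    # True when the person appears markedly shorter on the image than
    # the distance-based estimate predicts, i.e. the lower body is
    # occluded by a floor-standing obstacle in front of the person.
    return v3_measured < ratio * v2_expected

print(obstacle_between(85, 40))  # -> True: lower half occluded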
As described above, the distance detection means is realized using the detection results of the human body detection means.
In this embodiment, subdivided person position determination areas and obstacle position determination areas are provided, and the intake temperature of the indoor unit is controlled according to the detected positional relationship between a wall and a person. However, techniques for detecting the positions of persons and walls in the area to be air-conditioned without such subdivided areas are well known. From the detected positions it may instead be determined whether the distance between the person and the wall is less than (or not more than) a predetermined distance, and the intake temperature of the indoor unit may be controlled when that determination is affirmative.
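As a minimal sketch of this alternative, assuming planar floor coordinates for the detected person and wall and a hypothetical 1 m threshold:

import math

def person_near_wall(person_xy, wall_xy, threshold_m=1.0):
    # True when the detected person stands within threshold_m of the wall,
    # triggering the lowered intake-temperature setting during heating.
    return math.dist(person_xy, wall_xy) < threshold_m

print(person_near_wall((2.0, 3.0), (2.5, 3.4)))  # -> True (0.64 m apart)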
The air conditioner according to the present invention achieves energy-saving operation while realizing a comfortable air-conditioned space, and is useful as various air conditioners, including air conditioners for general household use.
2 indoor unit body, 2a front opening, 2b top opening,
4 movable front panel, 6 heat exchanger, 8 indoor fan,
10 air outlet, 12 upper and lower blades, 14 left and right blades,
16 filter, 18, 20 front panel arms,
24, 26 imaging sensor units, 28 light projecting unit.

Claims (10)

1. An air conditioner in which an indoor unit is provided with an imaging device including obstacle detection means for detecting the presence or absence of an obstacle, and in which wind direction changing means provided in the indoor unit is controlled on the basis of a detection signal of the obstacle detection means,
wherein the imaging device performs the determination of the presence or absence of an obstacle in accordance with conditions in the room.
2. The air conditioner according to claim 1, wherein the area to be air-conditioned is divided into a plurality of obstacle position determination areas, and the imaging device performs the determination of the presence or absence of an obstacle in each obstacle position determination area while distinguishing obstacles from the walls present around the area to be air-conditioned.
3. The air conditioner according to claim 2, wherein, among the plurality of obstacle position determination areas, the obstacle position determination areas corresponding to the region just in front of the front wall are excluded from obstacle detection by the imaging device.
4. The air conditioner according to claim 1, wherein the imaging device also serves as human body detection means for detecting the presence or absence of a person,
the wind direction changing means provided in the indoor unit is controlled on the basis of the detection signal of the human body detection means and the detection signal of the obstacle detection means, and
when the human body detection means detects that no person is present, the presence or absence of an obstacle is detected by means of the human body detection means, whereas when the human body detection means detects that a person is present, the presence or absence of an obstacle is not detected by means of the human body detection means.
5. The air conditioner according to claim 4, wherein the area to be air-conditioned is divided into a plurality of person position determination areas detected by the human body detection means and into a plurality of obstacle position determination areas detected by the obstacle detection means, and whether or not the presence or absence of an obstacle is to be detected by means of the human body detection means is determined according to the result of the determination, by the human body detection means, of the presence or absence of a person in the person position determination area corresponding to each obstacle position determination area.
6. The air conditioner according to claim 1, wherein the indoor unit is provided with human body detection means for detecting the presence or absence of a person and obstacle detection means for detecting the presence or absence of an obstacle, the wind direction changing means provided in the indoor unit being controlled on the basis of the detection signal of the human body detection means and the detection signal of the obstacle detection means,
the human body detection means and the obstacle detection means are realized by an imaging device, and
the area to be air-conditioned is divided into a plurality of person position determination areas detected by the human body detection means and into a plurality of obstacle position determination areas detected by the obstacle detection means, and, when the obstacle detection means determines the presence or absence of an obstacle in each obstacle position determination area, the determination result of the obstacle detection means is updated in obstacle position determination areas belonging to person position determination areas in which the human body detection means has determined that no person is present, whereas the determination result of the obstacle detection means is not updated in obstacle position determination areas belonging to person position determination areas in which the human body detection means has determined that a person is present.
7. The air conditioner according to claim 6, wherein, in obstacle position determination areas belonging to person position determination areas in which the human body detection means has determined that no person is present, the previous determination result of the obstacle detection means is updated with the current determination result, whereas in obstacle position determination areas belonging to person position determination areas in which the human body detection means has determined that a person is present, the previous determination result of the obstacle detection means is not updated with the current determination result.
8. The air conditioner according to claim 7, wherein a default value is set for each obstacle position determination area, and when the determination by the obstacle detection means in a given obstacle position determination area is the first such determination, the default value is used in place of the previous determination result.
9. The air conditioner according to claim 7 or 8, wherein, when the air conditioner starts operating, the presence or absence of an obstacle is determined only for predetermined obstacle position determination areas among the plurality of obstacle position determination areas, and the determination result of the obstacle detection means is updated regardless of the result of the determination of the presence or absence of a person by the human body detection means.
10. The air conditioner according to claim 9, wherein the predetermined obstacle position determination areas are those for which the depression angle from the obstacle detection means is equal to or smaller than a predetermined angle.
PCT/JP2010/005883 2009-10-06 2010-09-30 Air conditioner WO2011043039A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009232360A JP5487869B2 (en) 2009-10-06 2009-10-06 Air conditioner
JP2009-232360 2009-10-06

Publications (1)

Publication Number Publication Date
WO2011043039A1 true WO2011043039A1 (en) 2011-04-14

Family

ID=43856523

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/005883 WO2011043039A1 (en) 2009-10-06 2010-09-30 Air conditioner

Country Status (2)

Country Link
JP (1) JP5487869B2 (en)
WO (1) WO2011043039A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5697583B2 (en) * 2011-11-21 2015-04-08 三菱電機株式会社 Room shape recognition method and apparatus, and air conditioner using the same
CN112665160B (en) * 2020-12-21 2022-01-28 珠海格力电器股份有限公司 Control method of air conditioner and air conditioner

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0372249U (en) * 1989-11-16 1991-07-22
JP2008224099A (en) * 2007-03-12 2008-09-25 Mitsubishi Electric Corp Air conditioning device
JP2008304083A (en) * 2007-06-05 2008-12-18 Nikon Corp Air conditioning device
JP2009092281A (en) * 2007-10-05 2009-04-30 Mitsubishi Electric Building Techno Service Co Ltd Air-conditioning control system
JP2009139010A (en) * 2007-12-06 2009-06-25 Sharp Corp Air conditioner
JP2009186136A (en) * 2008-02-08 2009-08-20 Panasonic Corp Air conditioner


Also Published As

Publication number Publication date
JP2011080663A (en) 2011-04-21
JP5487869B2 (en) 2014-05-14

Similar Documents

Publication Publication Date Title
JP5402488B2 (en) Air conditioner
JP5454065B2 (en) Air conditioner
WO2011043054A1 (en) Air conditioner
JP5402487B2 (en) Air conditioner
JP2011080621A (en) Air conditioner
JP6335425B2 (en) Air conditioner
JP5819271B2 (en) Air conditioner
JP5815490B2 (en) Air conditioner
JP5697583B2 (en) Room shape recognition method and apparatus, and air conditioner using the same
JP2013024534A (en) Situation recognition device
JP2012042074A (en) Air conditioner
JP5488297B2 (en) Air conditioner
JP2010169373A (en) Air conditioner
JP5487867B2 (en) Air conditioner
JP2015052431A (en) Indoor unit of air conditioner, and air conditioner
JP2017053603A (en) Air conditioner
JP2015190666A (en) Indoor machine of air conditioning machine, and air conditioning machine using the same
JP2012037102A (en) Device and method for identifying person and air conditioner with person identification device
JP5487869B2 (en) Air conditioner
JP2011080685A (en) Air conditioner
JP2010286208A (en) Air conditioner
JP2016044863A (en) Air conditioner
KR102223178B1 (en) Air conditioner and method for controlling the same
JP6692134B2 (en) Air conditioner
JP2014081144A (en) Air conditioner

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10821716

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10821716

Country of ref document: EP

Kind code of ref document: A1