WO2006077654A1 - Elevator - Google Patents

Elevator

Info

Publication number
WO2006077654A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
car
information
building
display image
Prior art date
Application number
PCT/JP2005/001029
Other languages
French (fr)
Inventor
Aernoud Brouwers
Original Assignee
Mitsubishi Denki Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Denki Kabushiki Kaisha filed Critical Mitsubishi Denki Kabushiki Kaisha
Priority to EP05704139A priority Critical patent/EP1838605A4/en
Priority to PCT/JP2005/001029 priority patent/WO2006077654A1/en
Priority to CNA2005800105411A priority patent/CN1942384A/en
Publication of WO2006077654A1 publication Critical patent/WO2006077654A1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B3/00Applications of devices for indicating or signalling operating conditions of elevators
    • B66B3/002Indicators
    • B66B3/008Displaying information not related to the elevator, e.g. weather, publicity, internet or TV
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B1/00Control systems of elevators in general
    • B66B1/34Details, e.g. call counting devices, data transmission from car to control system, devices giving information to the control system
    • B66B1/3492Position or motion detectors or driving means for the detector
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B3/00Applications of devices for indicating or signalling operating conditions of elevators

Definitions

  • the processing unit 9 changes the image shown inside the car 4 based on information indicative of manipulations of a manipulation device for change.
  • the passengers can operate the system through the manipulation device for change, for example to shift the image or zoom in on a location they are interested in.
  • the screen 3 can be used not only for showing the surroundings of the building 1, but also to show the current floor, advertisements, outside weather information such as temperature and humidity, etc.
  • the image of the surroundings of the building 1 may be shown inside one car 4 that is raised and lowered in the elevator shaft 2
  • the image of the surroundings of the building 1 may also be shown inside each of multiple individual cars.
  • multiple car position detecting devices, which independently detect the positions of the individual cars, are provided in the elevator shaft of each elevator. Further, the processing unit 9 selects multiple display image portions 8 corresponding to the individual cars based on position information from the individual car position detecting devices, and sends the corresponding display image portion 8 to each car. That is, the image corresponding to each individual car position is shown inside each car.
  • the image corresponding to each individual car can be shown inside each car through processing by a common processing unit 9, thus further facilitating image processing.
  • the processing unit 9 can acquire image information from the individual cameras 6 common to the processing unit 9, whereby one camera system can serve a group of multiple elevators, making it unnecessary to mount multiple cameras on the building 1 for each individual elevator. Therefore, the number of cameras can be reduced, leading to a reduction in cost.
  • since the processing unit 9 is able to reproduce the entire surroundings, multiple display image portions 8 can be taken to serve not one elevator but multiple elevators, each car showing the surroundings corresponding to the individual car location.
  • in this example, the network 16 and the link 17 consist of wired connections
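The group-service idea above — one processing unit cutting several display windows out of the same stitched panorama, one per car — can be sketched as follows. This is a minimal illustration; the row-based panorama model, the dimensions, and the function names are assumptions for the sketch, not anything specified in the patent.

```python
# One processing unit serving a group of cars from a shared panorama.
# The panorama is modelled as rows of pixels indexed from the bottom of
# the building; all dimensions are illustrative assumptions.

def portions_for_cars(panorama_rows, building_height_m, car_positions_m, window_height_m):
    """Map each car to its slice of the shared panorama."""
    total_rows = len(panorama_rows)
    rows_per_metre = total_rows / building_height_m
    half = int(window_height_m * rows_per_metre / 2)
    portions = {}
    for car_id, pos in car_positions_m.items():
        centre = int(pos * rows_per_metre)
        lo, hi = max(0, centre - half), min(total_rows, centre + half)
        portions[car_id] = panorama_rows[lo:hi]
    return portions

# Two cars of one group at different heights share the same camera system.
panorama = [[y] for y in range(1000)]                 # 1000-row dummy panorama
views = portions_for_cars(panorama, 100.0, {"A": 20.0, "B": 75.0}, 2.0)
```

Because the panorama is built once per frame, adding a car to the group only adds one more slice operation, which is why a single camera system can serve several elevators.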

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Indicating And Signalling Devices For Elevators (AREA)
  • Cage And Drive Apparatuses For Elevators (AREA)

Abstract

In an elevator, multiple imaging devices are arranged so as to be spaced from each other in a height direction of a building. Each imaging device views a part of the surroundings of the building within each imaging area, and outputs image information corresponding to each imaging area. A car is raised and lowered in an elevator shaft provided inside the building. Further, in the elevator shaft, there is provided a car position detecting device which outputs position information upon detecting the car position. Based on the position information from the car position detecting device and the image information from each imaging device, an image processing device selects a part of the image information from each imaging device as a display image portion and performs processing for showing the display image portion inside the car.

Description

Description
Elevator
Technical Field
This invention relates to an elevator in which a car and a counterweight ascend and descend in an elevator shaft.
Background Art
Many times, elevators in high-rise buildings are located in the core of the complex. This generally means that it is impossible to have a window in the elevator car providing an external view. To reduce the possibility of claustrophobia and to give the passengers something to look at, it is possible to add a virtual window that provides a real-time external view. The solution is a computer screen or television screen integrated in, for example, the back wall of the elevator. Nowadays this is easy to accomplish, since the thickness of screens has been reduced so far that they have little influence on the car area or the shaft area. For creating the image, cameras are fixed to the facade of the building.
Japanese Patent No. 3484731 describes a couple of solutions for creating such an external image. One solution is to have one camera per elevator car on the facade that moves synchronously with the elevator. This means that on high-rise buildings the camera has to travel along the entire facade of the building at the same (high) speed as the elevator. It is basically a separate elevator system for the camera.
The second solution describes a camera that is fixed to the facade of the building at a height equal to half the travel height of the elevator car. To provide the image for the complete travel of the elevator car, the camera changes its viewing angle in the vertical direction.
Disclosure of the Invention
However, these two solutions have the following disadvantages. If there is more than one elevator in the building, each elevator needs a separate camera system. If the cameras move along the facade of the building, many guide systems need to be provided, which can overlap windows, etc.
Also, a separate camera system for each elevator car is not economical, especially since there are many mechanical parts that also require maintenance.
Moving the camera by means of a guide system also means the guide system must be perfectly straight to avoid vibration of the camera: when the camera vibrates, the image in the car vibrates in the same way, but magnified due to the camera lens. The result can be that passengers get nauseous, especially when the motion of the car does not correspond to the motion on the screen.
This invention has been made to solve the above-mentioned problems, and an object thereof is to provide an elevator in which an image corresponding to the car position can be shown inside the car, and which can cut down on costs and allow easy maintenance and inspection.
Brief Description of the Drawings
Fig. 1 is a block diagram illustrating an elevator according to an embodiment of this invention.
Fig. 2 is a conceptual diagram illustrating images of individual cameras before processing takes place in the processing unit shown in Fig. 1, resulting in the corrected image.
Best Mode for carrying out the Invention
Figs. 1 and 2 show the principle of this system. Fig. 1 is a block diagram illustrating an elevator according to an embodiment of this invention. Referring to Fig. 1, a vertically extending elevator shaft 2 is provided inside a building 1. A hoisting machine (not shown) serving as a drive device is installed inside the elevator shaft 2. A main rope 5 is wound around a drive sheave of the hoisting machine. A car 4 and a counterweight (not shown) are suspended inside the elevator shaft 2 by the main rope 5. The car 4 is raised and lowered inside the elevator shaft 2 by the drive force of the hoisting machine. In most cases it is impossible to view the surroundings of the building from inside the car 4, even when a glass pane or window is provided in the car 4. To overcome this limitation, a screen 3 is mounted inside the car 4. The screen 3 is used as a virtual window to show the surroundings of the building. Further, the screen 3 is ultra-thin and is transmissive, reflective, or emissive.
Attached to a facade 10 of the building 1 are a plurality of cameras (imaging devices) 6, each viewing a part of the surroundings of the building 1. The cameras 6 are fixed to the building 1 in a stationary, non-pivoting manner. The cameras 6 are spaced from each other so that their respective imaging areas 7 are shifted from each other. In this example, the cameras 6 are provided over the entire height of the building and equally spaced. Each imaging area 7 partially overlaps a part of its adjacent imaging area 7. Further, each camera 6 converts the information obtained as it partly views the surroundings of the building 1 into electrical image information for output.
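The relationship between camera spacing, per-camera coverage, and overlap can be made concrete with a small calculation. The figures below (facade height, vertical coverage per camera, overlap fraction) are illustrative assumptions, not values from the patent; the sketch only shows how equal spacing with overlap determines the camera count.

```python
# Equally spaced facade cameras: the vertical coverage of one camera and
# the required overlap between neighbours determine the spacing and how
# many cameras are needed. All numbers are illustrative assumptions.

def camera_layout(building_height_m, coverage_per_camera_m, overlap_fraction):
    """Return (spacing, camera_count) for equally spaced facade cameras."""
    spacing = coverage_per_camera_m * (1.0 - overlap_fraction)
    # The first camera covers the bottom of the facade; add cameras every
    # `spacing` metres until the top of the facade is reached.
    count = 1
    top_covered = coverage_per_camera_m
    while top_covered < building_height_m:
        count += 1
        top_covered += spacing
    return spacing, count

# 100 m facade, 12 m of view per camera, 20 % overlap between neighbours.
spacing, count = camera_layout(100.0, 12.0, 0.2)
```

The deliberate overlap is what later lets the processing unit correct for inaccurate mounting purely in software.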
That is, the surroundings of the building 1 are electronically captured by a plurality of cameras 6 attached to the building's facade 10. Each camera 6 views an imaging area 7 of the surroundings of the building 1. Each imaging area 7 partly overlaps the imaging area 7 of the adjacent camera 6 to ensure that the entire surroundings are captured. In theory, it is possible to mount the cameras 6 to the building 1 in such a way that the edges of each imaging area 7 correspond with the edges of the adjacent imaging area 7. To do this, however, a lot of expensive physical adjustment of each camera 6 is required. To keep the procedure simple, the cameras 6 are mounted less accurately to the facade 10, but with an overlap of each imaging area 7. The adjustment is later performed electronically by means of a processing unit 9.
A processing unit (image processing device) 9 for processing image information from the individual cameras 6 is electrically connected to the individual cameras 6. In this example, the processing unit 9 is electrically connected to the individual cameras 6 via an information communication network (network) 16 consisting of wired cables (wires). Further, a car position detecting device (not shown) for detecting the position of the car 4 is electrically connected to the processing unit 9. The processing unit 9 calculates the position of the car 4 based on position information from the car position detecting device.
When the elevator is operating, the actual position of the car 4 is constantly sent to the processing unit 9. The position information is used to decide the part (the display image portion) 8 of the image to show on the screen 3 inside the car 4.
That is, to reproduce the view of the surroundings of the building 1 as seen from the position of the car 4, position information indicative of the position of the car 4 is constantly sent from the car position detecting device to the processing unit 9.
Mounted to the car 4 is a projector (not shown) for showing an image on the screen 3. The processing unit 9 is electrically connected to the projector. In this example, the processing unit 9 is electrically connected to the projector via a link 17 consisting of a wired cable (wire). The projector is adapted to show an image on the screen 3 based on information from the processing unit 9.
Based on image information from the individual cameras 6 and position information from the car position detecting device, the processing unit 9 selects a part of the image information as a display image portion 8 and then performs processing to show the display image portion 8 on the screen 3. Specifically, the processing unit 9 selects as the display image portion 8 a portion of the individual image information which corresponds to the position of the car 4 (a portion within a fixed area at the same height as the position of the car 4), performs corrections on the display image portion 8, and then sends information of the corrected display image portion 8 to the projector.
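The selection step — cutting out "a portion within a fixed area at the same height as the position of the car" — can be sketched as follows. The row-based panorama model, the metric-to-pixel conversion, and the names are assumptions made for illustration only.

```python
# Pick the display image portion from a stitched facade panorama at the
# height of the car. The panorama is modelled as rows of pixels indexed
# from the bottom of the building; dimensions are illustrative assumptions.

def select_display_portion(panorama_rows, building_height_m, car_position_m, window_height_m):
    """Return the slice of panorama rows centred on the car's height."""
    total_rows = len(panorama_rows)
    rows_per_metre = total_rows / building_height_m
    centre = int(car_position_m * rows_per_metre)
    half = int(window_height_m * rows_per_metre / 2)
    lo = max(0, centre - half)
    hi = min(total_rows, centre + half)
    return panorama_rows[lo:hi]

panorama = [[y] * 4 for y in range(1000)]          # 1000-row dummy panorama
portion = select_display_portion(panorama, 100.0, 50.0, 2.0)
```

Repeating this selection as fresh position information arrives is what keeps the virtual window synchronized with the car's motion.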
That is, the processing unit 9 can first collect all images from the cameras 6 via the network 16 and combine them into one image before selecting the display image portion 8 that is to be shown on the screen 3 via the link 17.
The projector shows an image on the screen 3 based on the information of the corrected display image portion 8. As a result, the view of the surroundings of the building 1 as seen from the position of the car 4 is shown on the screen 3.
The motion on the screen 3 in the car 4 must be perfectly synchronized with the car motion; otherwise passengers inside the car 4 might experience a conflict between seeing the images and feeling the car motion, resulting in nausea.
To make this possible, the images of the cameras 6 should slightly overlap. Just after installation, a correction vector is determined for each camera 6 to align and rotate its image so that it corresponds with those of the other cameras 6. A correction is required because physically aligning each camera 6 is difficult and expensive.
In this way, one total image of the outside is created based on the images from all the cameras 6. That is, the processing unit 9 grasps the actual position of the car 4 based on position information from the car position detecting device, and selects the images to show on the screen 3 from among the image information from the individual cameras 6 based on the grasped position of the car 4. The processing unit 9 performs processing for combining the selected image information. Further, the processing unit 9 can repeat this processing. As a result, the image shown on the screen 3 is continuously updated as the car 4 moves.
Depending on the car position, the images of one or more cameras 6 are selected and, if necessary, combined to create the actual view corresponding to the car position. This view is made visible on the screen 3 in the car 4.
Since the passengers in the car 4 are rather close to the screen 3, it is necessary to use a high-resolution image; this also requires high-resolution cameras, which results in large data streams between the cameras 6 and the processing unit 9. For fluent motion, around 30 images per second need to be supplied to the screen 3 in the car 4.
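The "large data streams" concern can be made concrete with a rough calculation. The resolution and colour depth below are illustrative assumptions (the patent does not specify them); only the 30 images per second figure comes from the text.

```python
# Rough uncompressed data rate for one camera-to-processing-unit stream.
width, height = 1280, 720        # assumed camera resolution
bytes_per_pixel = 3              # assumed 24-bit colour
fps = 30                         # roughly 30 images/s for fluent motion

bytes_per_second = width * height * bytes_per_pixel * fps
print(bytes_per_second / 1e6, "MB/s per camera")   # about 83 MB/s uncompressed
```

Even at this modest assumed resolution the uncompressed stream is tens of megabytes per second per camera, which motivates the compressed, on-request variant described later.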
Figure 2 shows the images 11-14 of the individual cameras 6 before processing takes place in the processing unit 9, together with the resulting corrected image 15.
The positions and angles of the images 11 to 14 obtained from the individual cameras 6 differ from each other due to, for example, errors in mounting the individual cameras 6. The processing unit 9 adjusts the respective positions and angles of the images 11 to 14 such that the images 11 to 14 partly overlap each other. That is, the processing unit 9 selectively rotates and shifts each of the images 11 to 14 to align them (that is, to correct the images 11 to 14), creating one total image 15 from the corrected images 11 to 14. Further, the processing unit 9 selects a part of the total image 15 as the display image portion 8 based on position information from the car position detecting device, and sends information of the display image portion 8 to the projector. That is, when the cameras 6 are mounted to the facade 10, the requirement is that the images 11-14 shall at least overlap, as can be seen in the example. Although not intended, a slightly rotated image 13 can occur. The processing unit 9 shifts and/or rotates each of the images 11-14 to be able to create one total image 15.
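The per-camera correction determined at installation can be sketched as a plane rotation plus translation applied to an image's coordinates. This is a geometric sketch only: the angle, offsets, and image size are illustrative assumptions, and a real implementation would resample pixels rather than just transform corner points.

```python
import math

# Apply one camera's correction vector: rotate the image's coordinates
# about the origin, then translate them into the common composite frame.
# All numeric values are illustrative assumptions.

def apply_correction(points, angle_deg, dx, dy):
    """Rotate points about the origin, then translate by (dx, dy)."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(x * cos_a - y * sin_a + dx, x * sin_a + y * cos_a + dy)
            for x, y in points]

# Corners of one camera image assumed to be mounted 1 degree rotated and
# offset by (3, -2) pixels relative to its nominal place in the composite.
corners = [(0, 0), (640, 0), (640, 480), (0, 480)]
corrected = apply_correction(corners, -1.0, 3.0, -2.0)
```

Because the correction is a fixed vector per camera, it can be measured once just after installation and then applied cheaply to every frame.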
In the elevator as described above, each camera 6 views a part of the surroundings of the building 1 in each of multiple imaging areas 7 and outputs image information corresponding to each imaging area 7, the car position detecting device detects the position of the car 4 to output position information, and the processing unit 9 selects the display image portion 8 based on the image information and the position information and performs processing for showing the display image portion 8 on the screen 3 inside the car 4, whereby the continuously changing image of the surroundings of the building 1 can be shown on the screen 3 inside the car 4 without displacing the individual cameras 6 relative to the building 1. This configuration not only reduces the trouble associated with mounting the individual cameras 6 on the building 1 but also makes it possible to use inexpensive, mass-produced cameras such as CCD or CMOS sensors as the cameras 6, enabling a reduction in cost. Further, the simplified mounting structure of the cameras 6 facilitates easy maintenance and inspection. Furthermore, fewer aesthetic problems arise with respect to the building facade, and the individual cameras 6 can be mounted on the building 1 without spoiling its exterior appearance.
Further, each imaging area 7 partially overlaps a part of its adjacent imaging area 7, which ensures that there will be no area that is not viewed by the cameras 6 due to an error in mounting the cameras 6, allowing continuous viewing of the surroundings of the building 1 with greater reliability.
Further, the image information from each camera 6 is electrically processable, making it possible to process the image information with ease and at greater speed.
Further, prior to selecting the display image portion 8, the processing unit 9 acquires image information from all the individual cameras 6, whereby it is not necessary for the individual cameras 6 to store the image information, and the cameras 6 can be further simplified in structure.
Further, the screen 3 for showing the display image portion 8 is provided inside the car 4, ensuring increased sharpness of the image shown.
While in the above-described example the processing unit 9 acquires image information from all the individual cameras 6 before selecting the display image portion 8 , the processing unit 9 may acquire from the cameras 6 only the display image portion 8 selected based on position information from the car position detecting device .
That is, the processing unit 9 calculates which of the cameras 6 are nearest to the car position, retrieves only those images via the network 16, and processes them to create the display image portion 8 that is to be shown on the screen 3 via the link 17.
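The patent does not specify how the nearest cameras are calculated; one minimal way to sketch this selection step, assuming each camera's mounting height is known, is:

```python
# Illustrative sketch only: pick the cameras 6 mounted nearest to the car
# position reported by the car position detecting device, so that only
# their images need be fetched over the network 16. Heights are hypothetical.

def nearest_cameras(car_height_m, camera_heights_m, count=2):
    """Indices of the `count` cameras mounted closest to the car position."""
    ranked = sorted(range(len(camera_heights_m)),
                    key=lambda i: abs(camera_heights_m[i] - car_height_m))
    return sorted(ranked[:count])

# Cameras mounted at 0, 3, 6 and 9 m; a car at 5 m selects the cameras
# at 3 m and 6 m, whose imaging areas bracket the car position.
assert nearest_cameras(5.0, [0.0, 3.0, 6.0, 9.0]) == [1, 2]
```

In practice the selection could equally be a direct table lookup from floor number to camera index; the distance ranking above is just one self-contained way to express the idea.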
In this case, each camera 6 is provided with a storage portion for storing a compressed form of the image information taken in its imaging area 7. Of the image information stored in the storage portion of each camera 6, the portion selected by a request from the processing unit 9 is sent to the processing unit 9 as the display image portion 8. That is, several image processing steps are distributed between the individual cameras 6 and the processing unit 9 for execution: each camera 6 performs pre-processing to compress its image beforehand and sends the compressed image to the processing unit 9 upon request.
Accordingly, the amount of information to be processed by the processing unit 9 can be reduced, making it possible to increase the throughput of the processing unit 9.
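This camera-side pre-processing can be sketched as follows. The class name, the use of zlib, and the dummy frame data are all assumptions for illustration; the patent specifies only that each camera stores a compressed image and sends it on request:

```python
import zlib

class CameraNode:
    """Hypothetical camera-side storage portion: compress each frame once
    at capture time, then serve the compressed bytes on request."""
    def __init__(self):
        self._stored = None

    def capture(self, raw_frame: bytes) -> None:
        # Pre-processing performed at the camera, not at the processing unit 9.
        self._stored = zlib.compress(raw_frame)

    def request_image(self) -> bytes:
        # What travels over the network 16 to the processing unit 9.
        return self._stored

cam = CameraNode()
cam.capture(b"\x80" * 10_000)          # dummy frame data (highly compressible)
payload = cam.request_image()
assert len(payload) < 10_000           # less data transferred and processed
assert zlib.decompress(payload) == b"\x80" * 10_000
```

The processing unit 9 then decompresses only the frames it actually requested, which is what reduces its workload.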
Further, while in the above-described example only the display image portion 8 corresponding to the surroundings of the building 1 is processed by the processing unit 9 and shown on the screen 3, an image corresponding to superimposed image data may be superimposed on the display image portion 8 and shown on the screen 3 inside the car 4. In this case, the processing unit 9 includes a storage portion (memory) for storing the image data to be superimposed and performs processing for superimposing the image of that data on the display image portion 8 and showing the resulting image on the screen 3 inside the car 4. The image of the image data to be superimposed is handled as an additional image different from that of the surroundings of the building 1.
As a result, near the elevator hall on the bottom floor, for example, the view could be virtually restricted by a wall that is added (superimposed) on the image.
By using the superimposing technique, it is also possible to add, for example, advertising to the real-world image in such a way that it appears to be part of the outside world. This method makes it possible to change the advertisements regularly and/or add advertisements at outside locations where a physical sign is not permitted.
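A minimal sketch of the superimposing step, treating each image as a flat list of pixel intensities (a deliberate simplification; real frames would be 2-D colour arrays, and the blending rule here is just one common choice):

```python
def superimpose(display, overlay, alpha=1.0):
    """Blend overlay pixels onto the display image portion.
    None in the overlay means 'transparent: keep the camera image'."""
    out = []
    for d, o in zip(display, overlay):
        out.append(d if o is None else round(alpha * o + (1 - alpha) * d))
    return out

# A fully opaque "wall" (intensity 0) stored in the processing unit's memory
# hides the lower half of a 4-pixel strip of the camera image.
assert superimpose([200, 200, 200, 200], [None, None, 0, 0]) == [200, 200, 0, 0]

# A semi-transparent advertisement blends with the outside view.
assert superimpose([100], [200], alpha=0.5) == [150]
```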
Further, while in the above example the image shown on the screen 3 is continuously updated through processing by the processing unit 9 and no manipulation of the image can be performed inside the car 4, it is also possible to provide inside the car 4 a manipulation device for change, with which changes can be made to the image shown inside the car 4 through manipulations inside the car 4. In this case, the processing unit 9 effects changes to the image to be shown inside the car 4 based on information indicative of the manipulations with the manipulation device for change.
The passengers can manipulate the system so as to shift the image or zoom in on a location that interests them through manipulations with the manipulation device for change.
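One way the shift-and-zoom manipulation could be realised is as a crop window computed over the display image portion. The image size, zoom factor, and centre point below are hypothetical; the patent leaves the implementation open:

```python
def crop_window(image_w, image_h, center_x, center_y, zoom):
    """Top-left corner and size of the sub-image to show for a given zoom.
    zoom=1 shows the whole display image portion; zoom=2 shows a quarter.
    The window is clamped so it never leaves the image."""
    w, h = int(image_w / zoom), int(image_h / zoom)
    x = min(max(center_x - w // 2, 0), image_w - w)
    y = min(max(center_y - h // 2, 0), image_h - h)
    return x, y, w, h

# 2x zoom centred on a point of interest in a 640x360 display image portion.
assert crop_window(640, 360, 480, 270, 2) == (320, 180, 320, 180)
# Panning toward a corner clamps the window to the image edge.
assert crop_window(640, 360, 0, 0, 2) == (0, 0, 320, 180)
```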
Further, it is possible to use the screen 3 not only for showing the surroundings of the building 1 , but also to show the current floor, advertisements, outside weather information such as temperature and humidity, etc .
Further, while in the above-described example the image of the surroundings of the building 1 is shown inside the car 4 with respect to only one car 4 that is raised and lowered in the elevator shaft 2, when multiple elevators are provided in the building 1, the image of the surroundings of the building 1 may be shown inside the car with respect to each of the individual cars.
In this case, multiple car position detecting devices, which independently detect the positions of the individual cars, are provided in the elevator shaft of each elevator. Further, the processing unit 9 selects multiple display image portions 8 corresponding to the individual cars based on position information from the individual car position detecting devices, and sends the corresponding display image portion 8 to each car. That is, the image corresponding to each individual car position is shown inside that car.
As a result, even when multiple elevators are provided in the building 1, the image corresponding to each individual car can be shown inside that car through processing by a common processing unit 9, thus further facilitating image processing. Further, in order to show the image corresponding to the individual car position inside each car with respect to multiple elevators, the processing unit 9 can acquire image information from the same set of individual cameras 6, whereby one camera system can service a group of multiple elevators, making it unnecessary to mount separate cameras on the building 1 for each elevator. Therefore, the number of cameras can be reduced, leading to a reduction in cost.
That is, since the processing unit 9 is able to reproduce the entire surroundings, it can extract multiple display image portions 8 so as to service not one elevator but multiple elevators, each car showing the surroundings corresponding to its own location.
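The one-camera-system, many-cars arrangement can be sketched as a mapping from each car's position to its own camera selection, all drawn from the shared set of cameras. Names, heights, and the nearest-camera rule are illustrative assumptions, not details from the patent:

```python
def display_portions(car_positions, camera_heights, count=2):
    """For each car, select the indices of the shared cameras whose images
    form that car's display image portion 8."""
    def nearest(height):
        ranked = sorted(range(len(camera_heights)),
                        key=lambda i: abs(camera_heights[i] - height))
        return tuple(sorted(ranked[:count]))
    return {car: nearest(pos) for car, pos in car_positions.items()}

# Four shared cameras mounted at 0, 3, 6 and 9 m; two cars at different
# heights each receive their own selection from the same camera system.
cams = [0.0, 3.0, 6.0, 9.0]
assert display_portions({"A": 1.0, "B": 8.0}, cams) == {"A": (0, 1), "B": (2, 3)}
```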
Further, while in the above-described example the network 16 and the link 17 consist of wires, it is also possible to use a wireless connection to transfer information among the cameras 6, the processing unit 9, and the projector.

Claims

Claims
1. An elevator characterized by comprising: a plurality of imaging devices arranged at a spacing from each other in a height direction of a building and each partly viewing surroundings of the building divided into a plurality of imaging areas, the imaging devices outputting image information corresponding to each of the imaging areas; a car raised and lowered in an elevator shaft provided inside the building; a car position detecting device which outputs position information upon detecting a position of the car; an image processing device which, based on the position information from the car position detecting device and the image information from each of the imaging devices, selects a portion of the image information corresponding to a position of the car as a display image portion, and performs processing for showing the display image portion inside the car.
2. An elevator according to Claim 1, characterized in that the imaging areas that are adjacent to each other partly overlap each other.
3. An elevator according to Claim 1 or 2, characterized in that the image information is electrically processable information.
4. An elevator according to any one of Claims 1 through 3, characterized in that the image processing device calculates the display image portion based on the position information and acquires only the display image portion from each of the imaging devices.
5. An elevator according to any one of Claims 1 through 3, characterized in that the image processing device acquires the image information from all of the imaging devices before selecting the display image portion.
6. An elevator according to any one of Claims 1 through 5, characterized in that a screen for showing the display image portion is provided inside the car.
7. An elevator according to any one of Claims 1 through 6, characterized in that the image processing device has a storage portion for storing image data to be superimposed that can be superimposed on the display image portion, and performs processing for superimposing an image of the image data to be superimposed on the display image portion and showing the resulting image inside the car.
8. An elevator according to any one of Claims 1 through 7, characterized in that the image of the image data to be superimposed is an additional image different from that of the surroundings of the building.
9. An elevator according to any one of Claims 1 through 8, characterized in that a manipulation device for change, with which changes can be made to an image shown inside the car through manipulations made inside the car based on the processing by the image processing device, is provided inside the car.
10. An elevator according to any one of Claims 1 through 9, characterized in that: a plurality of the cars are raised and lowered in the elevator shaft; a plurality of the car position detecting devices independently detect respective positions of the cars; and the image processing device selects a part of the image information as the display image portion corresponding to each of the cars based on the image information and the position information from each of the car position detecting devices, and performs processing for showing the display image portion inside each of the cars.
PCT/JP2005/001029 2005-01-20 2005-01-20 Elevator WO2006077654A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP05704139A EP1838605A4 (en) 2005-01-20 2005-01-20 Elevator
PCT/JP2005/001029 WO2006077654A1 (en) 2005-01-20 2005-01-20 Elevator
CNA2005800105411A CN1942384A (en) 2005-01-20 2005-01-20 Elevator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2005/001029 WO2006077654A1 (en) 2005-01-20 2005-01-20 Elevator

Publications (1)

Publication Number Publication Date
WO2006077654A1 true WO2006077654A1 (en) 2006-07-27

Family

ID=36692052

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/001029 WO2006077654A1 (en) 2005-01-20 2005-01-20 Elevator

Country Status (3)

Country Link
EP (1) EP1838605A4 (en)
CN (1) CN1942384A (en)
WO (1) WO2006077654A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013016921A1 (en) * 2013-10-11 2015-04-16 Oliver Bunsen Image display system and method for motion-synchronous image display in a means of transport
US10941018B2 (en) 2018-01-04 2021-03-09 Otis Elevator Company Elevator auto-positioning for validating maintenance
US10961082B2 (en) 2018-01-02 2021-03-30 Otis Elevator Company Elevator inspection using automated sequencing of camera presets

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010004547A1 (en) * 2008-06-17 2010-01-14 Digigage Ltd. System for altering virtual views
CN104787633B (en) * 2015-04-17 2017-04-12 管存忠 Single-camera real-time synchronous shooting panoramic lift
WO2017006147A1 (en) * 2015-07-03 2017-01-12 Otis Elevator Company Elevator car wall imaging system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS63139277U (en) * 1987-03-06 1988-09-13
JPH06211450A (en) * 1992-11-24 1994-08-02 Sanyo Electric Co Ltd Elevator system
JPH0781858A (en) * 1993-09-17 1995-03-28 Mitsubishi Electric Corp Elevator image information system
JPH09194167A (en) * 1996-01-19 1997-07-29 Sanyo Electric Co Ltd Elevator system
JPH1179580A (en) * 1997-09-03 1999-03-23 Mitsubishi Denki Bill Techno Service Kk External image displaying device for elevator

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5485897A (en) * 1992-11-24 1996-01-23 Sanyo Electric Co., Ltd. Elevator display system using composite images to display car position


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1838605A4 *


Also Published As

Publication number Publication date
EP1838605A4 (en) 2012-06-27
EP1838605A1 (en) 2007-10-03
CN1942384A (en) 2007-04-04

Similar Documents

Publication Publication Date Title
WO2006077654A1 (en) Elevator
US9628772B2 (en) Method and video communication device for transmitting video to a remote user
EP2818948B1 (en) Method and data presenting device for assisting a remote user to provide instructions
US9571798B2 (en) Device for displaying the situation outside a building with a lift
JP4195991B2 (en) Surveillance video monitoring system, surveillance video generation method, and surveillance video monitoring server
KR101981850B1 (en) Hollow cylindrical rotating full-color led display apparatus
JP5516344B2 (en) Traffic vehicle monitoring system and vehicle monitoring camera
CN101209791A (en) Elevator long distance point detection system
EP1128676A2 (en) Intruding object monitoring method and intruding object monitoring system
WO2004106858A1 (en) Stereo camera system and stereo optical module
CN101557975A (en) Platform screen door
JP4475164B2 (en) Monitoring system and monitoring method
JP2012099013A (en) Passing vehicle monitoring system and vehicle monitoring camera
JP2023024827A (en) elevator
JP6955584B2 (en) Door image display system and monitor
JP5506656B2 (en) Image processing device
KR100847182B1 (en) Elevator
JPWO2020039897A1 (en) Station monitoring system and station monitoring method
US20140152857A1 (en) Camera Apparatus and System
JP5210251B2 (en) Elevator landscape video display
US20200396385A1 (en) Imaging device, method for controlling imaging device, and recording medium
JP6230223B2 (en) Display direction control system and display position control system
JP2010028401A (en) Monitoring camera system
JP2004297405A (en) Apparatus, system, and method for photographing
JPH1093955A (en) Remote monitoring device for image of elevator

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2006515397

Country of ref document: JP

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2005704139

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 200580010541.1

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 1020067021858

Country of ref document: KR

NENP Non-entry into the national phase

Ref country code: DE

WWP Wipo information: published in national office

Ref document number: 2005704139

Country of ref document: EP