Description
Elevator
Technical Field
This invention relates to an elevator in which a car and a counterweight ascend and descend in an elevator shaft.
Background Art
Elevators in high-rise buildings are often located in the core of the complex. This generally means that it is impossible to have a window in the elevator car with an external view. To reduce the possibility of claustrophobia and/or to give the passengers something to look at, it is possible to add a virtual window that provides a real-time external view. The solution is a computer screen or television screen integrated in, for example, the back wall of the elevator. Nowadays this is easy to accomplish, since screens have become so thin that they have little influence on the car area or the shaft area. For creating the image, cameras are fixed to the facade of the building.
Japanese Patent No. 3484731 describes a couple of solutions for creating such an external image. One solution described is to have one camera per elevator car on the facade that moves synchronously with the elevator. This means that on high-rise buildings the camera has to travel along the entire facade of the building at the same (high) speed as the elevator. It is basically a separate elevator system for the camera.
The second solution describes a camera that is fixed to the facade of the building at a height equal to half the travel height of the elevator car. To provide the image for the complete travel of the elevator car, the camera changes its viewing angle in the vertical direction.
Disclosure of the Invention
However, these two solutions have the following disadvantages. If there is more than one elevator in the building, each elevator needs a separate camera system. In case the cameras move along the facade of the building, a lot of guide systems need to be provided, which can overlap windows, etc.
Also, a separate camera system for each elevator car is not economical, especially since it involves a lot of mechanical parts that also require maintenance.
Moving the camera by means of a guide system also means the guide system must be perfectly straight to avoid vibration of the camera, because when the camera vibrates the image in the car will vibrate in the same way, but magnified due to the camera lens. The result can be that passengers get nauseous, especially when the motion of the car does not correspond to the motion on the screen.
This invention has been made to solve the above-mentioned problems, and an object thereof is to provide an elevator in which an image corresponding to the car position can be shown inside the car, and which can cut down on costs and allow easy maintenance and inspection.
Brief Description of the Drawings
Fig. 1 is a block diagram illustrating an elevator according to an embodiment of this invention.
Fig. 2 is a conceptual diagram illustrating images of individual cameras before processing takes place in a processing unit shown in Fig. 1, resulting in the corrected image.
Best Mode for Carrying Out the Invention
Figs. 1 and 2 show the principle of this system. Fig. 1 is a block diagram illustrating an elevator according to an embodiment of this invention. Referring to Fig. 1, a vertically extending elevator shaft 2 is provided inside a building 1. A hoisting machine (not shown) serving as a drive device is installed inside the elevator shaft 2. A main rope 5 is wound around a drive sheave of the hoisting machine. A car 4 and a counterweight (not shown) are suspended inside the elevator shaft 2 by the main rope 5. The car 4 is raised and lowered inside the elevator shaft 2 by the drive force of the hoisting machine. In most cases it is impossible to view the surroundings of the building from inside the car 4 even when a glass pane or window is provided in the car 4. To overcome this limitation, a screen 3 is mounted inside the car 4. The screen 3 is used as a virtual window to show the surroundings of the building. Further, the screen 3 is ultra-thin and is transmissive, reflective, or emissive.
Attached to a facade 10 of the building 1 are a plurality of cameras (imaging devices) 6, each viewing a part of the surroundings of the building 1. The cameras 6 are fixed to the building 1 in a stationary, non-pivoting manner. The cameras 6 are so spaced from each other that their respective imaging areas 7 are shifted from each other. In this example, the cameras 6 are provided over the entire height of the building and equally spaced. Each imaging area 7 partially overlaps a part of its adjacent imaging area 7. Further, each camera 6 converts information that is obtained as the camera partly views the surroundings of the building 1 into electrical image information for output.
That is, the surroundings of the building 1 are electronically captured by a plurality of cameras 6 that are attached to the building's facade 10. Each camera 6 views an imaging area 7 of the surroundings of the building 1. Each imaging area 7 partly overlaps the imaging area 7 of the adjacent camera 6 to ensure that the entire surroundings are captured. In theory, it is possible to mount the cameras 6 to the building 1 in such a way that the edges of each imaging area 7 correspond with the edges of the adjacent imaging area 7. To do this, however, a lot of expensive physical adjustment of each camera 6 is required. To keep the procedure simple, the cameras 6 are mounted less accurately to the facade 10, but with an overlap of each imaging area 7. The adjustment is later performed electronically by means of a processing unit 9.
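The overlap condition can be illustrated with a simple geometric check. The sketch below is not part of the embodiment; the field of view, camera spacing, and viewing distance are purely illustrative assumptions.

```python
import math

def vertical_coverage(mount_height_m, fov_deg, distance_m):
    """Return the (bottom, top) of the strip of scenery seen by one camera.

    Assumes the camera looks horizontally at a scene plane at
    `distance_m` with a symmetric vertical field of view `fov_deg`.
    """
    half = distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return (mount_height_m - half, mount_height_m + half)

# Illustrative values: cameras every 8 m, 60-degree vertical FOV, scene 10 m away.
spacing, fov, dist = 8.0, 60.0, 10.0
lower = vertical_coverage(0.0, fov, dist)
upper = vertical_coverage(spacing, fov, dist)
overlap_m = lower[1] - upper[0]   # positive -> adjacent imaging areas 7 overlap
print(overlap_m > 0)              # True: the two areas share a band of ~3.5 m
```

With these assumed numbers, each imaging area 7 shares roughly a 3.5 m band with its neighbour, so a moderate mounting error still leaves no gap.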
A processing unit (image processing device) 9 for processing image information from the individual cameras 6 is electrically connected to the individual cameras 6. In this example, the processing unit 9 is electrically connected to the individual cameras 6 via an information communication network (network) 16 consisting of wired cables (wires). Further, a car position detecting device (not shown) for detecting the position of the car 4 is electrically connected to the processing unit 9. The processing unit 9 calculates the position of the car 4 based on position information from the car position detecting device.
When the elevator is operating, the actual position of the car 4 is constantly sent to the processing unit 9. The position information is used to decide the part (the display image portion 8) of the image to show on the screen 3 inside the car 4.
That is, to reproduce the view of the surroundings of the building 1 as seen from the position of the car 4, position information indicative of the position of the car 4 is constantly sent from the car position detecting device to the processing unit 9.
Mounted to the car 4 is a projector (not shown) for showing an image on the screen 3. The processing unit 9 is electrically connected to the projector. In this example, the processing unit 9 is electrically connected to the projector via a link 17 consisting of a wired cable (wire). The projector is adapted to show an image on the screen 3 based on information from the processing unit 9.
Based on image information from the individual cameras 6 and position information from the car position detecting device, the processing unit 9 selects a part of the image information as a display image portion 8 and then performs processing to show the display image portion 8 on the screen 3. Specifically, the processing unit 9 selects as the display image portion 8 a portion of the individual image information which corresponds to the position of the car 4 (a portion within a fixed area at the same height as the position of the car 4), performs corrections on the display image portion 8, and then sends information of the corrected display image portion 8 to the projector.
That is, the processing unit 9 can first collect all images from the cameras 6 via the network 16 and create one image out of them before selecting the display image portion 8 that is to be shown on the screen 3 via the link 17.
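The selection of the display image portion 8 from the total image reduces to simple row arithmetic. The sketch below is illustrative only; the image dimensions, the window size, and the top-down row convention are assumptions, not taken from the embodiment.

```python
def select_display_rows(car_height_m, travel_height_m, total_rows, window_rows):
    """Pick the rows of the stitched facade image that lie at car height.

    Assumes the total image spans the full travel height uniformly and
    that row 0 is the top of the building (an illustrative convention).
    """
    # Centre of the display window, measured from the top of the image.
    centre = (1.0 - car_height_m / travel_height_m) * total_rows
    top = int(round(centre - window_rows / 2))
    # Clamp so the window stays inside the image at the terminal floors.
    top = max(0, min(top, total_rows - window_rows))
    return top, top + window_rows

# Car halfway up a 100 m travel, 4000-row total image, 600-row virtual window:
print(select_display_rows(50.0, 100.0, 4000, 600))   # (1700, 2300)
```

As the car 4 moves, repeating this selection with the updated position information slides the window through the total image, which is what keeps the view on the screen 3 consistent with the car position.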
The projector shows an image on the screen 3 based on the information of the corrected display image portion 8. As a result, the view of the surroundings of the building 1 as seen from the position of the car 4 is shown on the screen 3.
The motion on the screen 3 in the car 4 must be perfectly synchronized with the car motion; otherwise passengers inside the car 4 might experience conflicts between seeing the images and feeling the car motion, resulting in nausea.
To make this possible, the images of the cameras 6 should slightly overlap. Just after installation, a correction vector is determined for each camera 6 to align and rotate its image such that it corresponds with those of the other cameras 6. The reason a correction is required is that physically aligning each camera 6 is difficult and expensive.
In this way, one total image of the outside is created based on the images from all the cameras 6. That is, the processing unit 9 grasps the actual position of the car 4 based on position information from the car position detecting device, and selects the images to show on the screen 3 from among the image information from the individual cameras 6 based on the grasped position of the car 4. The processing unit 9 performs processing for combining the selected image information. Further, the processing unit 9 can repeat this processing. As a result, the image shown on the screen 3 is continuously updated as the car 4 moves.
Depending on the car position, the images of one or more cameras 6 are selected and, if necessary, combined to create the actual view corresponding to the car position. This view is made visible on the screen 3 in the car 4.
Since the passengers in the car 4 are rather close to the screen 3, it is necessary to use a high-resolution image; this also requires high-resolution cameras, which results in large data streams between the cameras 6 and the processing unit 9. To have fluent motion, around 30 images per second need to be supplied to the screen 3 in the car 4.
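A rough calculation shows why these data streams are large. The resolution and colour depth below are illustrative assumptions; the embodiment only specifies the rate of around 30 images per second.

```python
# Rough, illustrative estimate of the raw data stream one camera produces.
# 1920x1080 RGB at 3 bytes/pixel is an assumed resolution, not a figure
# from the embodiment.
width, height, bytes_per_px, fps = 1920, 1080, 3, 30
bytes_per_frame = width * height * bytes_per_px        # ~6.2 MB per image
bytes_per_second = bytes_per_frame * fps               # raw stream per camera
print(bytes_per_second / 1e6)                          # ~186.6 (MB/s)
```

Even a single uncompressed camera stream at this assumed resolution approaches 187 MB/s, which is why the network 16 and the processing unit 9 must handle, or reduce, substantial traffic.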
Fig. 2 shows the images 11-14 of the individual cameras 6 before processing takes place in the processing unit 9, resulting in the corrected image 15.
The positions and angles of the images 11 to 14 obtained from the individual cameras 6 differ from each other due to, for example, errors in mounting the individual cameras 6. In the processing unit 9, the respective positions and angles of the images 11 to 14 are adjusted such that the images 11 to 14 partly overlap each other. That is, the processing unit 9 selectively rotates and shifts each of the images 11 to 14 to align them (that is, to correct the images 11 to 14), creating one total image 15 from the corrected images 11 to 14. Further, the processing unit 9 selects a part of the total image 15 as the display image portion 8 based on position information from the car position detecting device, and sends information of the display image portion 8 to the projector.
That is, when the cameras 6 are mounted to the facade 10, the requirement is that the images 11-14 shall at least overlap, as can be seen in the example. Although not intended, it is possible for a slightly rotated image 13 to occur. The processing unit 9 shifts and/or rotates each of the images 11-14 to be able to create one total image 15.
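The per-camera correction vector can be sketched as a planar shift-and-rotate applied to image coordinates. The function below is an illustrative sketch, not the embodiment's implementation; the numeric offsets and the 2-degree tilt are assumed values.

```python
import math

def apply_correction(points, dx, dy, theta_deg):
    """Rotate by theta_deg about the origin, then shift by (dx, dy).

    `points` are image-corner coordinates; the correction vector
    (dx, dy, theta_deg) per camera would be determined once after
    installation and then reused for every frame.
    """
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

# A slightly rotated image (like image 13): undo an assumed 2-degree tilt
# and shift the frame into its slot of the total image 15.
corners = [(0, 0), (640, 0), (640, 480), (0, 480)]
aligned = apply_correction(corners, 100.0, 960.0, -2.0)
print(aligned[0])   # the origin corner lands at (100.0, 960.0)
```

Because the correction is a fixed per-camera transform, it can be measured once at installation and then applied electronically to every frame, which is what makes the less accurate mechanical mounting acceptable.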
In the elevator as described above, each camera 6 views a part of the surroundings of the building 1 in each of multiple imaging areas 7 and outputs image information corresponding to each imaging area 7, the car position detecting device detects the position of the car 4 to output position information, and the processing unit 9 selects the display image portion 8 based on the image information and the position information and performs processing for showing the display image portion 8 on the screen 3 inside the car 4, whereby the continuously changing image of the surroundings of the building 1 can be shown on the screen 3 inside the car 4 without displacing the individual cameras 6 relative to the building 1. This configuration not only reduces the trouble associated with mounting the individual cameras 6 on the building 1 but also makes it possible to use inexpensive, commonly mass-produced cameras such as CCD or CMOS sensors as the cameras 6, enabling a reduction in cost. Further, the simplified mounting structure of the cameras 6 facilitates easy maintenance and inspection. Furthermore, fewer esthetic problems are involved with respect to the building facade, and the individual cameras 6 can be mounted on the building 1 without spoiling the exterior appearance.
Further, each imaging area 7 partially overlaps a part of its adjacent imaging area 7, which ensures that there will be no area that is not viewed by the cameras 6 due to an error in mounting the cameras 6, allowing continuous viewing of the surroundings of the building 1 with greater reliability.
Further, image information from each camera 6 is electrically processable information, making it possible to process the image information with ease and at greater speed.
Further, prior to selecting the display image portion 8, the processing unit 9 acquires image information from all the individual cameras 6, whereby it is not necessary for the individual cameras 6 to store the image information, and the cameras 6 can be further simplified in structure.
Further, the screen 3 for showing the display image portion 8 is provided inside the car 4, ensuring increased sharpness of the image shown.
While in the above-described example the processing unit 9 acquires image information from all the individual cameras 6 before selecting the display image portion 8, the processing unit 9 may acquire from the cameras 6 only the display image portion 8 selected based on position information from the car position detecting device.
That is, the processing unit 9 calculates which of the cameras 6 are nearest to the car position, retrieves only those images via the network 16, and processes only those images to create the display image portion 8 that is to be shown on the screen 3 via the link 17.
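The nearest-camera calculation is a simple comparison of mounting heights against the car position. The sketch below is illustrative; the uniform 10 m camera spacing and the choice of requesting two cameras are assumptions, not part of the embodiment.

```python
def nearest_cameras(car_height_m, camera_heights_m, count=2):
    """Indices of the cameras mounted closest to the car position.

    Only these cameras' images would then be requested over the
    network; `count` controls how many neighbours are fetched.
    """
    order = sorted(range(len(camera_heights_m)),
                   key=lambda i: abs(camera_heights_m[i] - car_height_m))
    return sorted(order[:count])

heights = [5.0, 15.0, 25.0, 35.0, 45.0]     # assumed: cameras every 10 m
print(nearest_cameras(22.0, heights))       # [1, 2]
```

With the car at an assumed 22 m, only the cameras at 15 m and 25 m need to supply images, so the traffic on the network 16 is limited to the portion actually required.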
In this case, each camera 6 is provided with a storage portion for storing a compressed form of the image information taken in each imaging area 7. Of the image information stored in the storage portion of each camera 6, the portion selected by a request from the processing unit 9 is sent to the processing unit 9 as the display image portion 8. That is, several sets of image processing are distributed among the individual cameras 6 and the processing unit 9 for execution. This means that each camera 6 performs pre-processing to compress the image, and sends the compressed image to the processing unit 9 upon request.
Accordingly, the amount of information to be processed by the processing unit 9 can be reduced, making it possible to increase the throughput of the processing unit 9.
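The division of labour between camera-side compression and the processing unit's request can be sketched as follows. This is an illustrative model only: the `Camera` class, its methods, and the use of zlib as a stand-in codec are all assumptions, not the embodiment's design.

```python
import zlib

class Camera:
    """Sketch of the camera-side storage portion: each frame is
    compressed as soon as it is captured, so only compressed data
    crosses the network when the processing unit asks for it.
    (zlib stands in for whatever image codec a real system would use.)
    """
    def __init__(self):
        self._stored = b""

    def capture(self, raw_frame: bytes):
        # Pre-processing performed at the camera itself.
        self._stored = zlib.compress(raw_frame)

    def request(self) -> bytes:
        # Called by the processing unit over the network.
        return self._stored

cam = Camera()
frame = bytes([40, 40, 40]) * 10_000              # flat grey test frame
cam.capture(frame)
received = cam.request()
print(len(received) < len(frame))                 # True: less data transferred
assert zlib.decompress(received) == frame         # the frame is fully recovered
```

The compression work is thus paid for once per frame at each camera, while the processing unit only decompresses the few images it actually requested, which is how the throughput gain arises.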
Further, while in the above-described example only the display image portion 8 corresponding to the surroundings of the building 1 is processed by the processing unit 9 to be shown on the screen 3, an image corresponding to superimposed image data may be superimposed on the display image portion 8 and shown on the screen 3 inside the car 4.
In this case, the processing unit 9 includes a storage portion (memory) for storing the image data to be superimposed, and performs processing for superimposing the image of that image data on the display image portion 8 and showing the resulting image on the screen 3 inside the car 4. The image of the image data to be superimposed is handled as an additional image different from that of the surroundings of the building 1.
As a result, near the elevator hall on the bottom floor, for example, the view could be virtually restricted by a wall that is added (superimposed) on the image.
By using the superimposing technique, it is also possible to add, for example, advertising to the real-world image in such a way that it appears to be part of the outside world. This method makes it possible to change the advertisements regularly and/or to add advertisements at outside locations where a physical sign is not permitted.
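The superimposing step amounts to per-pixel blending of the stored additional image onto the display image portion. The sketch below is illustrative; the alpha-blending formula and the sample pixel values are assumptions, since the embodiment does not specify how the images are combined.

```python
def superimpose(base_px, overlay_px, alpha):
    """Blend one overlay pixel onto one display-image pixel.

    alpha = 1.0 makes the overlay opaque (e.g. the virtual wall near
    the bottom-floor hall); intermediate values keep the outside view
    partly visible behind the added image.
    """
    return tuple(round(alpha * o + (1.0 - alpha) * b)
                 for b, o in zip(base_px, overlay_px))

sky = (120, 180, 240)          # assumed pixel of the display image portion 8
sign = (255, 0, 0)             # assumed pixel of a superimposed advertisement
print(superimpose(sky, sign, 0.5))   # (188, 90, 120)
```

Running this blend over the overlay's pixel region, frame by frame, is what makes the added wall or advertisement appear to be part of the outside world.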
Further, while in the above example the image shown on the screen 3 is continuously updated through processing by the processing unit 9 and no manipulation of the image can be performed inside the car 4, it is also possible to provide inside the car 4 a manipulation device for change, with which changes can be made to the image shown inside the car 4 through manipulations inside the car 4. In this case, the processing unit 9 effects changes to the image to be shown inside the car 4 based on information indicative of the manipulations with the manipulation device for change.
The passengers can manipulate the system such that they can shift the image or zoom in on a location that they are interested in through manipulations with the manipulation device for change.
Further, it is possible to use the screen 3 not only for showing the surroundings of the building 1, but also to show the current floor, advertisements, outside weather information such as temperature and humidity, etc.
Further, while in the above-described example the image of the surroundings of the building 1 is shown inside the car 4 with respect to only one car 4 that is raised and lowered in the elevator shaft 2, when multiple elevators are provided in the building 1, the image of the surroundings of the building 1 may be shown inside the car with respect to each of the individual cars.
In this case, multiple car position detecting devices, which independently detect the positions of the individual cars, are provided in the elevator shaft of each elevator. Further, the processing unit 9 selects multiple display image portions 8 corresponding to the individual cars based on position information from the individual car position detecting devices, and sends the corresponding display image portion 8 to each car. That is, the image corresponding to the individual car position is shown inside each car.
As a result, even when multiple elevators are provided in the building 1, the image corresponding to each individual car can be shown inside each car through processing by a common processing unit 9, thus further facilitating image processing. Further, in order to show the image corresponding to the individual car position inside each car with respect to multiple elevators, the processing unit 9 can acquire image information from the individual cameras 6 common to the processing unit 9, whereby one camera system can service a group of multiple elevators, making it unnecessary to mount multiple cameras on the building 1 in association with individual elevators. Therefore, the number of cameras can be reduced, leading to a reduction in cost.
That is, since the processing unit 9 is able to reproduce the entire surroundings, it is possible to take multiple display image portions 8 to service not one elevator but multiple elevators, each car showing the surroundings corresponding to the individual car location.
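Servicing several cars from the one shared total image then amounts to repeating the row selection once per car. The sketch below is illustrative; the travel height, image size, and window size are assumed values, and the top-down row convention is an illustrative choice.

```python
def portions_for_cars(car_heights_m, travel_height_m, total_rows, window_rows=600):
    """One display image portion per car, all cut from the same total image.

    A sketch of a common processing unit servicing a group of
    elevators: the row arithmetic simply mirrors the single-car
    selection, applied once per car position.
    """
    portions = {}
    for car_id, h in enumerate(car_heights_m):
        centre = (1.0 - h / travel_height_m) * total_rows  # row 0 = top
        top = max(0, min(int(round(centre - window_rows / 2)),
                         total_rows - window_rows))
        portions[car_id] = (top, top + window_rows)
    return portions

# Three cars at different heights share one camera system and one image:
print(portions_for_cars([10.0, 50.0, 90.0], 100.0, 4000))
```

Because every portion is cut from the same stitched image, adding another elevator to the group costs only another selection step, not another camera system.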
Further, while in the above-described example the network 16 and the link 17 consist of wires, it is also possible to use a wireless connection to transfer information among the cameras 6, the processing unit 9, and the projector.