Disclosure of Invention
Embodiments of the present specification aim to provide a more effective scheme for displaying vehicle damage information, so as to overcome deficiencies in the prior art.
To achieve the above object, one aspect of the present specification provides a method for displaying damage information, including:
acquiring at least one image of a vehicle including vehicle damage information;
generating a three-dimensional model of the vehicle based on the at least one image;
generating, based on the at least one image, respective texture images at a plurality of surface locations of the three-dimensional model, wherein the plurality of surface locations are determined based on the at least one image;
performing texture mapping at the plurality of surface locations with their respective texture images, respectively, to obtain a mapped three-dimensional model; and
displaying the mapped three-dimensional model to display the vehicle damage information.
In one embodiment, the method for displaying vehicle damage information further comprises, after acquiring the at least one image of the vehicle including the vehicle damage information, acquiring vehicle model information of the vehicle, and acquiring a pre-modeled three-dimensional model based on the vehicle model information, wherein generating the three-dimensional model of the vehicle based on the at least one image comprises modifying the pre-modeled three-dimensional model based on the at least one image to generate the three-dimensional model of the vehicle.
In one embodiment, the method for displaying vehicle damage information further comprises, after texture mapping at the plurality of surface locations with their respective texture images, respectively, obtaining vehicle damage information based on the at least one image, and adding information to the mapped three-dimensional model based on the vehicle damage information.
In one embodiment, in the method for displaying vehicle damage information, obtaining the vehicle damage information based on the at least one image comprises performing vehicle damage detection and identification based on the at least one image using a predetermined algorithm to predict at least one of the following: the damaged part, the damage location, the damage type, and the damage degree.
In one embodiment, in the method for displaying vehicle damage information, adding information to the mapped three-dimensional model based on the vehicle damage information comprises at least one of:
adding information on the mapped three-dimensional model for highlighting the damage location; and
adding information related to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the method of presenting damage information, adding information on the mapped three-dimensional model for highlighting the damage location comprises highlighting at the damage location.
In one embodiment, in the method for displaying vehicle damage information, adding information related to at least one of the damaged part, the damage type, and the damage degree comprises differentially displaying, in different colors on the mapped three-dimensional model, any of: different damaged parts, different damage types, and different damage degrees.
In one embodiment, in the method for displaying vehicle damage information, adding information related to at least one of the damaged part, the damage type, and the damage degree comprises adding, on the mapped three-dimensional model, textual information related to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the method of presenting damage information, the vehicle damage information comprises a plurality of damage locations, wherein presenting the mapped three-dimensional model comprises automatically presenting each damage location of the mapped three-dimensional model in turn based on a distribution of the plurality of damage locations.
In one embodiment, in the method for displaying vehicle damage information, the plurality of damage locations includes a first damage location corresponding to a first damage, wherein automatically displaying each damage location of the mapped three-dimensional model in turn comprises displaying the first damage at the first damage location with an optimal viewing angle, the optimal viewing angle including a first angle and a first distance, wherein the first angle is an angle directly facing the first damage, and the first distance is determined based on the location and the degree of the first damage.
In one embodiment, in the method of presenting damage information, the presenting the mapped three-dimensional model comprises interactively presenting the mapped three-dimensional model.
In one embodiment, in the method for displaying vehicle damage information, displaying the mapped three-dimensional model comprises displaying the mapped three-dimensional model through any one of the following display devices: planar displays, immersive panoramic displays, VR/AR head-mounted displays, and holographic projection devices.
Another aspect of the present specification provides an apparatus for displaying vehicle damage information, including:
a first acquisition unit configured to acquire at least one image of a vehicle including vehicle damage information;
a first generating unit configured to generate a three-dimensional model of the vehicle based on the at least one image;
a second generating unit configured to generate, based on the at least one image, respective texture images at a plurality of surface locations of the three-dimensional model, wherein the plurality of surface locations are determined based on the at least one image;
a second obtaining unit configured to perform texture mapping at the plurality of surface locations with their respective texture images, respectively, to obtain a mapped three-dimensional model; and
a display unit configured to display the mapped three-dimensional model to display the vehicle damage information.
In one embodiment, the apparatus for displaying vehicle damage information further comprises: a third acquisition unit configured to acquire vehicle model information of the vehicle after the at least one image of the vehicle including the vehicle damage information is acquired; and a fourth acquisition unit configured to acquire a pre-modeled three-dimensional model based on the vehicle model information, wherein the first generating unit is further configured to modify the pre-modeled three-dimensional model based on the at least one image to generate the three-dimensional model of the vehicle.
In one embodiment, the apparatus for displaying vehicle damage information further comprises a fifth obtaining unit configured to obtain vehicle damage information based on the at least one image after texture mapping is performed at the plurality of surface locations with their respective texture images, respectively, and an adding unit configured to add information to the mapped three-dimensional model based on the vehicle damage information.
In one embodiment, in the apparatus for displaying vehicle damage information, the fifth obtaining unit is further configured to perform vehicle damage detection and identification based on the at least one image using a predetermined algorithm to predict at least one of the following: the damaged part, the damage location, the damage type, and the damage degree.
In one embodiment, in the apparatus for displaying vehicle damage information, the adding unit includes at least one of the following subunits:
a first adding subunit configured to add information on the mapped three-dimensional model for highlighting the damage location; and
a second adding subunit configured to add information related to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the apparatus for displaying vehicle damage information, the first adding subunit is further configured to highlight at the damage location.
In one embodiment, in the apparatus for displaying vehicle damage information, the second adding subunit is further configured to differentially display, in different colors on the mapped three-dimensional model, any of: different damaged parts, different damage types, and different damage degrees.
In one embodiment, in the apparatus for displaying vehicle damage information, the second adding subunit is further configured to add, on the mapped three-dimensional model, textual information related to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the apparatus for displaying vehicle damage information, the vehicle damage information includes a plurality of damage locations, wherein the display unit is further configured to automatically display each damage location of the mapped three-dimensional model in turn based on a distribution of the plurality of damage locations.
In one embodiment, in the apparatus for displaying vehicle damage information, the plurality of damage locations includes a first damage location, the first damage location corresponds to a first damage, wherein the display unit is further configured to display the first damage at the first damage location with an optimal viewing angle, the optimal viewing angle includes a first angle and a first distance, the first angle is an angle directly facing the first damage, and the first distance is determined based on the location and the degree of the first damage.
In one embodiment, in the apparatus for displaying vehicle damage information, the display unit is further configured to interactively display the mapped three-dimensional model.
In one embodiment, in the apparatus for displaying vehicle damage information, the display unit is further configured to display the mapped three-dimensional model through any one of the following display devices: planar displays, immersive panoramic displays, VR/AR head-mounted displays, and holographic projection devices.
Another aspect of the present specification provides a computing device, including a memory and a processor, wherein the memory stores executable code, and the processor executes the executable code to implement any one of the above methods for displaying vehicle damage information.
According to the above scheme for displaying vehicle damage information, a three-dimensional model of the vehicle is reconstructed based on images uploaded by a user, and the images including the damage are transformed and mapped onto the model, so that damage assessment and verification personnel can observe the damage more easily. Further, if damage assessment is carried out automatically by an algorithm, the assessment results can be displayed on the model in an augmented manner, further reducing the difficulty of the verification work. This lowers the skill requirements on damage assessment and verification personnel and saves labor costs, while also reducing the error rate and improving work efficiency.
Detailed Description
The embodiments of the present specification will be described below with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of a system 100 for displaying vehicle damage information according to an embodiment of the present specification. As shown in fig. 1, the system 100 includes a modeling module 11, a texture generation module 12, a vehicle damage detection module 13, and a display module 14. The system 100 is, for example, a server of an insurance company. After a user (e.g., the owner of an accident vehicle) uploads at least one damage image or video (hereinafter collectively referred to as damage images) to the system 100, the system 100 first inputs the damage images into the modeling module 11. In the modeling module 11, a three-dimensional model of the damaged vehicle is generated based on the damage images. The system 100 also inputs the damage images into the texture generation module 12. In the texture generation module 12, the at least one image is mapped to the surface of the three-dimensional model based on the vehicle surface positions shown in the damage images, so as to generate textures at the corresponding surface positions of the three-dimensional model. In addition, the system 100 inputs the damage images into the vehicle damage detection module 13. In the vehicle damage detection module 13, vehicle damage detection and identification are performed on the damage images using an existing algorithm to predict vehicle damage information. The textures generated by the texture generation module 12 may then be attached to the three-dimensional model generated by the modeling module 11, and information related to the damage information may be added for display in the display module 14. In the display module 14, the three-dimensional model can be displayed interactively, or displayed automatically according to the vehicle damage locations.
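The data flow among the four modules can be sketched as follows. This is only an illustrative Python sketch: all class names, method names, and file names are assumptions for demonstration, not names used by the specification.

```python
# Hypothetical sketch of the four-module pipeline of Fig. 1.
class DamageDisplaySystem:
    def build_model(self, images):
        # Modeling module 11: reconstruct a 3D model from the damage images.
        return {"mesh": "vehicle", "source_images": len(images)}

    def generate_textures(self, images):
        # Texture generation module 12: one texture per mapped surface position.
        return [{"image": img, "surface_position": i} for i, img in enumerate(images)]

    def detect_damage(self, images):
        # Vehicle damage detection module 13 (optional): predict damage info.
        return [{"part": "left rear fender", "type": "scratch"}]

    def display(self, model, textures, damage_info):
        # Display module 14: attach textures and annotations for presentation.
        model["textures"] = textures
        model["annotations"] = damage_info
        return model

system = DamageDisplaySystem()
images = ["close_up.jpg", "long_shot.jpg"]
model = system.build_model(images)
shown = system.display(model, system.generate_textures(images), system.detect_damage(images))
```

As in the text above, the detection step is optional: `display` works equally well when `damage_info` is an empty list, and the detected information only augments the model.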
Through the system 100, the damage condition of the entire vehicle can be displayed comprehensively, intuitively, and accurately to damage assessment or claim verification personnel, helping them assess damage, verify claims, and the like quickly and accurately.
It is to be understood that the system 100 shown in FIG. 1 is merely exemplary, and that systems according to embodiments of the present specification are not limited to the configuration shown in FIG. 1. For example, the vehicle damage detection module 13 is not essential; the damage information it detects is used only to augment the three-dimensional model.
Fig. 2 shows a flowchart of a method for displaying vehicle damage information according to an embodiment of the present disclosure. The method comprises the following steps:
in step S202, acquiring at least one image of the vehicle including the vehicle damage information;
in step S204, generating a three-dimensional model of the vehicle based on the at least one image;
in step S206, generating respective texture images at a plurality of surface locations of the three-dimensional model based on the at least one image, wherein the plurality of surface locations are determined based on the at least one image;
in step S208, performing texture mapping at the plurality of surface locations with their respective texture images, respectively, to obtain a mapped three-dimensional model; and
in step S210, displaying the mapped three-dimensional model to display the vehicle damage information.
First, at step S202, at least one image of the vehicle including the vehicle damage information is acquired. The image may be a still photograph or a frame taken from a video. Typically, the at least one image is obtained when the owner of the accident vehicle uploads at least one damage photograph or video over a network. In one example, a user (e.g., the accident vehicle owner) uploads two damage photographs of the vehicle, as shown in fig. 3 and 4, as the at least one image, where fig. 3 shows scratch damage on a part of the vehicle and fig. 4 shows a long shot of the scratch damage. Alternatively, the at least one image may be obtained by a surveyor of the insurance company taking photographs or video at the scene of the accident. In one embodiment, if the volume of photographs or videos uploaded by the owner is too large, they may be input into an existing model to roughly select the images most relevant to the accident as the at least one image.
In step S204, a three-dimensional model of the vehicle is generated based on the at least one image. Here, the modeling may be performed based on the at least one image using various three-dimensional modeling techniques known in the art. In one embodiment, an accurately pre-modeled three-dimensional model of the vehicle type may be obtained based on vehicle model information of the accident vehicle, provided by the user or identified by an algorithm from the at least one image. The pre-modeled three-dimensional model may then be modified based on the at least one image. For example, it may be determined from the at least one image that the accident vehicle has a dent on its left front door, in which case the corresponding position of the front door on the existing three-dimensional model may be modified into a recessed structure. In addition, the pre-modeled three-dimensional model may include surface textures corresponding to the vehicle type, so that the vehicle can be displayed more realistically.
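One hedged way to picture the dent modification described above is to push mesh vertices near the detected damage inward along their surface normals. The function below is a minimal sketch under that assumption; the mesh layout, falloff rule, and all names are illustrative, not the specification's actual modeling method.

```python
# Illustrative sketch: deform a pre-modeled mesh where the images show a dent.
def apply_dent(vertices, normals, dent_center, radius, depth):
    """Offset vertices within `radius` of the dent center inward by up to `depth`."""
    out = []
    for v, n in zip(vertices, normals):
        dist = sum((a - b) ** 2 for a, b in zip(v, dent_center)) ** 0.5
        if dist < radius:
            falloff = 1.0 - dist / radius  # deepest displacement at the center
            out.append(tuple(a - depth * falloff * c for a, c in zip(v, n)))
        else:
            out.append(v)  # vertices outside the dent are unchanged
    return out

# Two sample vertices on a flat panel with outward normals along +z.
verts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
norms = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
dented = apply_dent(verts, norms, dent_center=(0.0, 0.0, 0.0), radius=1.0, depth=0.1)
```

Here only the vertex at the dent center is displaced (to depth 0.1), while the vertex outside the radius keeps its original position.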
In step S206, based on the at least one image, respective texture images at a plurality of surface locations of the three-dimensional model are generated, wherein the plurality of surface locations are determined based on the at least one image. As will be understood by those skilled in the art, before texture mapping is performed on a three-dimensional model, the surface of the three-dimensional model is generally divided into a plurality of adjacent small patches, for example triangular patches, and texture mapping is then performed by obtaining a texture for each patch based on a planar photograph corresponding to the three-dimensional model. Similarly, in the embodiments of the present specification, the at least one image is segmented along the triangular patches based on the vehicle surface positions shown in the at least one image, so as to acquire small images corresponding to the triangular patches at corresponding positions on the three-dimensional model. The small images are then transformed based on the camera angle, position, and the like, to generate textures corresponding to the respective triangular patches on the three-dimensional model. It is to be understood that, in the embodiments of the present specification, the method of generating the texture of the three-dimensional model based on the image is not limited to the above; various texture generation methods available to those skilled in the art may be employed.
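The per-patch transformation described above can be sketched, under simplifying assumptions, as an affine warp that maps a triangle observed in the uploaded photograph onto its matching texture triangle on the model. In a real pipeline the transform would be derived from the camera pose; here the triangle coordinates and helper names are made-up examples.

```python
import numpy as np

def solve_affine(src_tri, dst_tri):
    """Least-squares affine transform mapping the src triangle vertices to dst."""
    src = np.asarray(src_tri, dtype=float)
    dst = np.asarray(dst_tri, dtype=float)
    # Design matrix [x, y, 1] per vertex; solve for the 2x3 affine parameters.
    A = np.hstack([src, np.ones((3, 1))])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T  # 2x3 affine matrix

def apply_affine(M, points):
    """Apply a 2x3 affine matrix to a list of 2D points."""
    pts = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
    return pts @ M.T

# A triangle as seen in the uploaded photo vs. its texture coordinates on the model.
photo_tri = [(10.0, 10.0), (50.0, 12.0), (30.0, 60.0)]
texture_tri = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
M = solve_affine(photo_tri, texture_tri)
```

Because three non-collinear point pairs determine an affine map exactly, `apply_affine(M, photo_tri)` recovers the texture triangle; sampling the photo pixels through the inverse of `M` would fill the patch's texture.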
In step S208, texture mapping is performed at the plurality of surface locations with their respective texture images, respectively, to obtain a mapped three-dimensional model. As described in step S206, each texture image corresponds to a specific triangular patch (i.e., a surface location). Accordingly, each triangular patch is mapped with its corresponding texture image according to the correspondence between the patches and the texture images, thereby mapping the vehicle damage information included in the at least one image onto the three-dimensional model and visually displaying it there.
In step S210, the mapped three-dimensional model is displayed to display the vehicle damage information. In embodiments of the present specification, the mapped three-dimensional model may be displayed by any one of the following display devices: planar displays, immersive panoramic displays, VR/AR head-mounted displays, and holographic projection devices. In one embodiment, the mapped three-dimensional model may be displayed interactively, i.e., a user (a damage assessor or claim verifier) may rotate the three-dimensional model by touch or mouse on a display, scale it, and so on, in order to locate damage on the vehicle, judge the degree of damage, determine the damage assessment results, and the like.
Fig. 5 schematically shows the display after texture mapping of the three-dimensional model of the accident vehicle based on fig. 3 and 4. The effect shown in FIG. 5 is only schematic; an actual texture-mapped three-dimensional model is more natural and realistic. As shown in fig. 3, a slight scratch on a part must be photographed in close-up to be clearly visible. In the prior art, this poses the problem that the damage assessor cannot tell from the isolated image which part of the vehicle the damage is located on. The assessor therefore needs to combine other related images, for example fig. 4, and imagine the relative positions of the vehicle and the camera when the image was taken, i.e., that the area the camera was aimed at is near the left rear wheel of the vehicle, in order to recognize that the damage occurred on the "left rear fender" of the vehicle. This requires damage assessment personnel to have strong spatial reasoning ability and a certain accumulation of experience. The claim settlement schemes of the prior art are therefore costly, inefficient, and error-prone.
With the embodiments of the present specification, a damage verification person can observe a three-dimensional model of the vehicle such as that shown in fig. 5, where the texture map on the model is partially or completely obtained by transforming the images uploaded by the user. When observing the vehicle model, damage assessment and verification personnel can freely translate, rotate, and scale the model to change the viewing angle. The location and component of the damage can thus be determined without relying on imagination and reasoning, and only the damage type and degree need to be judged.
Fig. 6 illustrates a flow chart of a method of presenting damage information in accordance with another embodiment of the present disclosure. As shown in fig. 6, the method comprises the steps of:
in step S602, acquiring at least one image of the vehicle including the vehicle damage information;
in step S604, generating a three-dimensional model of the vehicle based on the at least one image;
in step S606, generating respective texture images at a plurality of surface locations of the three-dimensional model based on the at least one image, wherein the plurality of surface locations are determined based on the at least one image;
in step S608, performing texture mapping at the plurality of surface locations with their respective texture images, respectively, to obtain a mapped three-dimensional model;
in step S610, acquiring vehicle damage information based on the at least one image;
in step S612, adding information to the mapped three-dimensional model based on the vehicle damage information to obtain an enhanced three-dimensional model; and
in step S614, displaying the enhanced three-dimensional model to display the vehicle damage information.
In the method shown in FIG. 6, the implementation of steps S602-S608 is the same as steps S202-S208 in FIG. 2, and will not be described again here.
In step S610, vehicle damage information is acquired based on the at least one image. In one embodiment, vehicle damage detection and identification are performed based on the at least one image using a predetermined algorithm to predict at least one of the following: the damaged part, the damage location, the damage type, and the damage degree. The predetermined algorithm may be, for example, a pre-trained damage detection model that outputs corresponding vehicle damage information based on the input images. The damage detection model may be obtained by training on a large number of annotated images including damage. In the embodiments of the present specification, the acquisition of the vehicle damage information in this step is not limited to the above manner; for example, the vehicle damage information may also be acquired from a text statement submitted by the accident vehicle owner, and so on.
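The output contract of such a detection step can be sketched as follows. This is a hypothetical stand-in for the pre-trained damage detection model: the feature fields and the threshold rules are invented for illustration and are not the specification's actual algorithm.

```python
# Hypothetical sketch: map per-image features to the four predicted fields.
def predict_damage(image_features):
    """Return one record per feature dict with part, location, type, and degree."""
    records = []
    for feat in image_features:
        records.append({
            "part": feat.get("part", "unknown"),
            "location": feat.get("bbox"),  # bounding box in image coordinates
            # Toy rules standing in for the trained model's classifiers:
            "type": "scratch" if feat.get("depth", 0) < 0.5 else "dent",
            "degree": "light" if feat.get("area", 0) < 100 else "severe",
        })
    return records

damage = predict_damage([{"part": "left rear fender", "bbox": (120, 80, 40, 15),
                          "depth": 0.1, "area": 600}])
```

Whatever model produces it, a record in this shape is all that steps S612 and S614 below need in order to annotate and present the damage.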
In step S612, information is added to the mapped three-dimensional model based on the vehicle damage information to obtain an enhanced three-dimensional model. Adding information to the mapped three-dimensional model may include adding information for highlighting the damage location, for example by highlighting the damage location on the three-dimensional model or indicating it with an arrow. Adding information may further comprise adding information related to at least one of: the damaged part, the damage type, and the damage degree, for example by differentially displaying, in different colors on the mapped three-dimensional model, any of: different damaged parts, different damage types, and different damage degrees, or by adding textual information describing the above damage information on the mapped three-dimensional model. Alternatively, the above damage information may be displayed by superimposing both colors and text.
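The color-coded and textual annotations described above can be sketched as a small augmentation pass. The color table and annotation format here are illustrative assumptions, not part of the specification.

```python
# Illustrative color coding for damage types; unknown types get a fallback color.
DAMAGE_COLORS = {"scratch": (255, 255, 0), "dent": (255, 128, 0), "crack": (255, 0, 0)}

def annotate_model(model, damage_records):
    """Attach highlight and text annotations to a mapped model (returns a copy)."""
    annotations = []
    for rec in damage_records:
        annotations.append({
            "location": rec["location"],  # where to draw the highlight/arrow
            "color": DAMAGE_COLORS.get(rec["type"], (255, 0, 255)),
            # Superimposed descriptive text: part, type, and degree together.
            "text": f'{rec["part"]}: {rec["type"]}, {rec["degree"]}',
        })
    return dict(model, annotations=annotations)

enhanced = annotate_model({"mesh": "vehicle"},
                          [{"location": (0.4, 0.7), "part": "left rear fender",
                            "type": "scratch", "degree": "light"}])
```

Color and text are stored together on each annotation, matching the option of superimposing both when the model is displayed.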
In step S614, the enhanced three-dimensional model is displayed to display the vehicle damage information. In one embodiment, a model display scheme may be designed according to the distribution of the damage locations. For example, where multiple damages on the accident vehicle are located at multiple damage locations, a model display script may be generated that automatically displays each damage location of the mapped three-dimensional model in turn based on the distribution of the damage locations. In one embodiment, the model can be automatically rotated, translated, and scaled in turn to the optimal viewing angle for each damage, so that verification personnel can review each damage without excessive manual interaction. The optimal viewing angle includes an angle and a distance relative to the damage: the angle is an angle directly facing the damage, and the distance is determined based on the location and the degree of the damage. For example, the larger the area at the damage location, the farther the distance, i.e., the smaller the viewing angle; and the lighter the damage, the closer the distance, i.e., the larger the viewing angle.
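The optimal-viewing-angle rule above can be sketched as a small function: the camera faces the damage head-on (along the surface normal), moves farther out for larger damage areas, and closer in for lighter damage. The base distance and degree weights are illustrative assumptions.

```python
# Sketch of the optimal viewing angle rule; constants are made-up assumptions.
def optimal_view(damage_normal, damage_area, damage_degree,
                 base_distance=1.0, degree_weights=None):
    """Return (view direction, camera distance) for one damage."""
    weights = degree_weights or {"light": 0.5, "moderate": 1.0, "severe": 1.5}
    # Angle: look along the negated surface normal, i.e. directly facing the damage.
    angle = tuple(-n for n in damage_normal)
    # Distance grows with the square root of the damage area (farther for larger
    # damage) and shrinks for lighter damage (closer, magnified view).
    distance = base_distance * (damage_area ** 0.5) * weights[damage_degree]
    return angle, distance

angle, d_light = optimal_view((0.0, 0.0, 1.0), 4.0, "light")
_, d_severe = optimal_view((0.0, 0.0, 1.0), 4.0, "severe")
```

A display script would call this once per damage location and interpolate the camera between the resulting poses, visiting each damage in turn.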
In addition, in this method the enhanced three-dimensional model may also be displayed interactively. For the display devices usable in this method, reference may be made to the description of the method shown in fig. 2, which is not repeated here.
Fig. 7 shows an apparatus 700 for displaying vehicle damage information according to an embodiment of the present disclosure, including:
a first acquisition unit 71 configured to acquire at least one image of the vehicle including the vehicle damage information;
a first generating unit 72 configured to generate a three-dimensional model of the vehicle based on the at least one image;
a second generating unit 73 configured to generate, based on the at least one image, respective texture images at a plurality of surface locations of the three-dimensional model, wherein the plurality of surface locations are determined based on the at least one image;
a second obtaining unit 74 configured to perform texture mapping at the plurality of surface locations with their respective texture images, respectively, to obtain a mapped three-dimensional model; and
a display unit 75 configured to display the mapped three-dimensional model to display the vehicle damage information.
In one embodiment, the apparatus for displaying vehicle damage information further comprises: a third acquisition unit 76 configured to acquire vehicle model information of the vehicle after the at least one image of the vehicle including the vehicle damage information is acquired; and a fourth acquisition unit 77 configured to acquire a pre-modeled three-dimensional model based on the vehicle model information, wherein the first generating unit is further configured to modify the pre-modeled three-dimensional model based on the at least one image to generate the three-dimensional model of the vehicle.
In one embodiment, the apparatus for displaying vehicle damage information further comprises a fifth obtaining unit 78 configured to obtain vehicle damage information based on the at least one image after texture mapping is performed at the plurality of surface locations with their respective texture images, respectively, and an adding unit 79 configured to add information to the mapped three-dimensional model based on the vehicle damage information.
In one embodiment, in the apparatus for displaying vehicle damage information, the fifth obtaining unit 78 is further configured to perform vehicle damage detection and identification based on the at least one image using a predetermined algorithm to predict at least one of the following: the damaged part, the damage location, the damage type, and the damage degree.
In one embodiment, in the apparatus for displaying vehicle damage information, the adding unit 79 includes at least one of the following subunits:
a first adding subunit 791 configured to add information on the mapped three-dimensional model for highlighting the damage location; and
a second adding subunit 792 configured to add, on the mapped three-dimensional model, information related to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the apparatus for displaying vehicle damage information, the first adding subunit 791 is further configured to highlight at the damage location.
In one embodiment, in the apparatus for displaying vehicle damage information, the second adding subunit 792 is further configured to differentially display, in different colors on the mapped three-dimensional model, any of: different damaged parts, different damage types, and different damage degrees.
In one embodiment, in the apparatus for displaying vehicle damage information, the second adding subunit 792 is further configured to add, on the mapped three-dimensional model, textual information related to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the apparatus for displaying vehicle damage information, the vehicle damage information includes a plurality of damage locations, wherein the display unit 75 is further configured to automatically display each damage location of the mapped three-dimensional model in turn based on a distribution of the plurality of damage locations.
In one embodiment, in the apparatus for displaying vehicle damage information, the plurality of damage locations includes a first damage location corresponding to a first damage, wherein the display unit 75 is further configured to display the first damage at the first damage location with an optimal viewing angle, the optimal viewing angle including a first angle and a first distance, wherein the first angle is an angle directly facing the first damage, and the first distance is determined based on the location and the degree of the first damage.
In one embodiment, in the apparatus for displaying the car damage information, the display unit 75 is further configured to interactively display the mapped three-dimensional model.
In one embodiment, in the apparatus for displaying vehicle damage information, the display unit 75 is further configured to display the mapped three-dimensional model through any one of the following display devices: planar displays, immersive panoramic displays, VR/AR head-mounted displays, and holographic projection devices.
Another aspect of the present specification provides a computing device, including a memory and a processor, wherein the memory stores executable code, and the processor executes the executable code to implement any one of the above methods for displaying vehicle damage information.
According to the above scheme for displaying vehicle damage information, a three-dimensional model of the vehicle is reconstructed based on the images uploaded by the user, and the images including the damage are transformed and mapped onto the model, so that damage assessment and verification personnel can observe the damage more easily. When observing the vehicle model, these personnel can freely translate, rotate, and scale the model to change the viewing angle. The location and component of the damage can thus be determined without relying on imagination and reasoning, and only the damage type and degree need to be judged. Furthermore, if damage assessment is performed automatically by an algorithm, the assessment results can be displayed on the model in an augmented manner, for example by superimposing visual guide marks, special colors, and descriptive text, further reducing the difficulty of the verification work. A model display scheme can also be designed according to the distribution of the damage locations: for example, where a vehicle has multiple damages located on multiple components, a model display script can be generated that automatically rotates, translates, and scales the model in turn to the optimal viewing angle for each damage, so that verification personnel can review the damage without excessive manual interaction. This lowers the skill requirements on damage assessment and verification personnel and saves labor costs, while also reducing the error rate and improving work efficiency.
The embodiments in the present specification are described in a progressive manner; the same and similar parts among the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively brief, and for relevant details reference may be made to the corresponding parts of the method embodiment.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
It will be further appreciated by those of ordinary skill in the art that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or a combination of both. The components and steps of the examples have been described above in general functional terms to illustrate clearly the interchangeability of hardware and software. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments illustrate the objects, technical solutions, and advantages of the present invention in further detail. It should be understood that they are merely exemplary embodiments of the present invention and are not intended to limit its scope; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present invention should be included in the scope of the present invention.