CN113179368B - Vehicle loss assessment data processing method and device, processing equipment and client - Google Patents


Info

Publication number
CN113179368B
CN113179368B (application CN202110345608.8A)
Authority
CN
China
Prior art keywords
shooting
damage
damaged area
vehicle
window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110345608.8A
Other languages
Chinese (zh)
Other versions
CN113179368A (en)
Inventor
周凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN202110345608.8A priority Critical patent/CN113179368B/en
Publication of CN113179368A publication Critical patent/CN113179368A/en
Application granted granted Critical
Publication of CN113179368B publication Critical patent/CN113179368B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • G06V 20/584: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads, of vehicle lights or traffic lights
    • H04N 23/617: Upgrading or updating of programs or applications for camera control
    • H04N 23/633: Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N 23/635: Region indicators; field of view indicators

Abstract

Embodiments of this specification disclose a vehicle damage assessment data processing method, apparatus, processing device, and client. A damaged portion of the vehicle can be automatically identified on the user's mobile device, the area to be photographed is marked in the shooting view in an easily recognizable manner, and the user is continuously guided to photograph or record video of that area. The user can thus capture damage-assessment images that meet the processing requirements without professional knowledge, which improves the processing efficiency of vehicle damage assessment and the user's interactive damage assessment experience.

Description

Vehicle loss assessment data processing method and device, processing equipment and client
This application is a divisional application of the patent application filed on May 8, 2018, with application No. 201810432696.3, entitled "A data processing method, apparatus, processing device, and client for vehicle damage assessment".
Technical Field
The embodiments of this specification belong to the technical field of insurance business data processing on computer terminals, and particularly relate to a data processing method, apparatus, processing device, and client for vehicle damage assessment.
Background
Motor vehicle insurance, i.e., automobile insurance (or simply car insurance), refers to commercial insurance that compensates for personal casualties or property loss of motor vehicles caused by natural disasters or accidents. With economic development the number of motor vehicles keeps increasing, and car insurance has become one of the largest lines of business in China's property insurance industry.
In the car insurance industry, when a vehicle owner files an accident claim, the insurance company needs to assess the degree of damage to the vehicle in order to determine the list of items to be repaired, the payout amount, and so on. Current assessment methods mainly include: a surveyor from the insurance company or a third-party assessment agency evaluates the accident vehicle on site; or the user photographs the accident vehicle under the guidance of insurance company personnel, transmits the photos over the network, and an assessor performs remote damage assessment from them. In these approaches to acquiring damage-assessment images, dispatching vehicles and personnel to the accident scene for survey is costly for the insurance company; the vehicle owner must spend considerable time waiting for the surveyor to arrive, which makes for a poor experience; and when the owner photographs the vehicle himself, lack of experience often means a surveyor must guide him by telephone or video call, which is time-consuming and labor-intensive. Even photos taken under such remote guidance frequently include many invalid images; when invalid damage-assessment images are acquired, the owner must shoot again, or the opportunity to shoot may be lost entirely, seriously affecting damage assessment efficiency and the user's service experience.
Therefore, there is a need in the art for a simpler and faster vehicle damage assessment scheme.
Disclosure of Invention
Embodiments of the present disclosure aim to provide a data processing method, apparatus, processing device, and client for vehicle damage assessment, in which a damaged portion of the vehicle can be automatically identified on the user's mobile device, the area to be photographed is marked in the shooting view in an easily recognizable manner, and the user is continuously guided to photograph or record video of that area. The user can thereby complete the shooting required for damage assessment without professional knowledge, improving the processing efficiency of vehicle damage assessment and the user's interactive experience.
The method, apparatus, processing device, and client for vehicle damage assessment provided by the embodiments of this specification are implemented as follows:
A vehicle damage assessment data processing method, the method comprising:
displaying a shooting window, so as to photograph the vehicle through the shooting window;
when it is identified that damage exists in the current shooting window, starting a new shooting strategy for the damaged area, the new shooting strategy being determined after adjusting shooting parameters for different shooting areas; and
photographing the damaged area.
A vehicle damage assessment data processing apparatus, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
displaying a shooting window, so as to photograph the vehicle through the shooting window;
when it is identified that damage exists in the current shooting window, starting a new shooting strategy for the damaged area, the new shooting strategy being determined after adjusting shooting parameters for different shooting areas; and
photographing the damaged area.
A client, comprising a processor and a memory for storing processor-executable instructions, wherein the instructions, when executed by the processor, implement:
displaying a shooting window, so as to photograph the vehicle through the shooting window;
when it is identified that damage exists in the current shooting window, starting a new shooting strategy for the damaged area, the new shooting strategy being determined after adjusting shooting parameters for different shooting areas; and
photographing the damaged area.
An electronic device, comprising a display screen, a processor, and a memory storing processor-executable instructions that, when executed by the processor, implement the method steps of any one of the embodiments of this specification.
According to the data processing method, apparatus, processing device, and client for vehicle damage assessment provided by the embodiments of this specification, a damaged portion of the vehicle can be automatically identified on the user's mobile device, the area to be photographed is marked in the shooting view in an easily recognizable manner, and the user is continuously guided to photograph or record video of that area. The user can thus capture damage-assessment images that meet the processing requirements without professional knowledge, improving the processing efficiency of vehicle damage assessment and the user's interactive damage assessment experience.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some of the embodiments described in the present specification; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of an embodiment of a method for processing vehicle impairment data provided in the present disclosure;
FIG. 2 is a schematic diagram of a deep neural network model used in an embodiment of the method described herein;
FIG. 3 is a schematic diagram, provided in the present specification, of marking a damaged area using dot-symbol rendering;
FIG. 4 is a schematic view of an implementation scenario of a shoot guidance embodiment in the method provided in the present specification;
FIG. 5 is a schematic illustration of an implementation scenario of another embodiment of the method provided in the present specification;
FIG. 6 is a block diagram of the hardware architecture of a client for interactive vehicle damage assessment processing to which an embodiment of the method or apparatus of the present invention is applied;
FIG. 7 is a schematic block diagram of an embodiment of a data processing apparatus for vehicle damage assessment provided in the present specification;
FIG. 8 is a schematic structural diagram of an embodiment of an electronic device provided in the present description.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments in this specification. All other embodiments obtained from one or more embodiments of this specification by a person of ordinary skill without inventive effort shall fall within the scope of protection of the embodiments of this specification.
One embodiment provided in this specification may be applied in a client/server system architecture. The client may be a terminal device with a shooting function used by personnel at the vehicle damage scene (who may be the owner of the accident vehicle, insurance company personnel, or other persons performing damage assessment), such as a smartphone, tablet computer, smart wearable device, or dedicated damage assessment terminal. The client may be provided with a communication module and communicatively connected to a remote server to exchange data with it. The server may include a server on the insurance company side or on the damage assessment service side; other implementation scenarios may involve servers of other parties, such as a terminal of a parts supplier or of a vehicle repair shop that has a communication link with the damage assessment server. The server may be a single computer device, a server cluster formed by multiple servers, or a server of a distributed system. In some application scenarios, the client can send image data captured on site to the server in real time, the server identifies the damage, and the recognition result is fed back to the client. Performing damage identification on the server side generally gives a higher processing speed than on the client side, which reduces the processing load on the client and speeds up damage identification. Of course, this specification does not exclude other embodiments in which all or part of the above processing is performed on the client side, such as real-time damage detection and identification by the client.
When a user takes damage-assessment photos or videos himself, the following problems often arise: 1. the user does not fully understand which damaged parts need to be photographed (for example, a scratch lies mainly on the front door and only slightly on the rear door, so the user ignores the rear door even though it also needs repainting and its damage must also be photographed); 2. the user cannot identify all damage (for example, a slight dent is hard for an ordinary person to spot with the naked eye); 3. it is difficult for the user to accurately control factors such as shooting distance, angle, and the proportion of the damaged portion in the frame. The present invention therefore provides a data processing method for vehicle damage assessment that can be applied on a mobile device, marks the area to be photographed in the shooting view in an easily recognizable manner, and continuously guides the user to photograph or record video of that area, so that the user can complete the shooting required for damage assessment without professional knowledge.
The following describes embodiments of the present disclosure using the specific application scenario of a mobile phone client as an example. Specifically, FIG. 1 is a schematic flow chart of an embodiment of a method for processing vehicle damage assessment data provided in the present disclosure. Although this specification provides method steps and apparatus structures as shown in the following embodiments or figures, the methods or apparatuses may, conventionally or without inventive effort, include more or fewer steps or modular units. For steps or structures with no logically necessary causal relationship, the execution order of the steps or the module structure of the apparatus is not limited to that shown in the embodiments or figures of this specification. The described methods or module structures may, in practice, be executed sequentially or in parallel in a device, server, or end product (for example, in a parallel-processor or multi-threaded environment, or even in a distributed processing or server cluster implementation). Of course, the following description of the embodiments does not limit other extensible technical solutions based on this specification, such as other implementation scenarios. In a specific embodiment, as shown in FIG. 1, an embodiment of the method for processing vehicle damage assessment data provided in this specification may include:
S0: displaying shooting guide information for photographing a first damaged area of the vehicle;
S2: when it is identified that a first damage exists in the current shooting window, determining a first damaged area of the first damage;
S4: after rendering the first damaged area in a salient manner, displaying the rendered first damaged area superimposed in the current shooting window using augmented reality;
S6: displaying shooting guide information for the first damaged area.
In this embodiment, the client on the user side may be a smartphone with a shooting function. At the vehicle accident scene, the user can open a mobile phone application implementing an embodiment of this specification to frame and photograph the scene. After the application is opened, a shooting window can be displayed on the client's display screen, and the vehicle is photographed through the shooting window. The shooting window may be a video shooting window used by the terminal for framing (image acquisition) at the vehicle damage scene, and image information captured by the client's integrated camera can be displayed in it. The specific interface structure of the shooting window and the related information displayed can be custom-designed.
Characteristic data of the vehicle can be acquired during shooting. The characteristic data can be set according to data processing requirements such as vehicle identification, environment identification, and image recognition. In general, the characteristic data may include data on each identified vehicle component, and may be used to construct 3D coordinate information and an augmented reality space model of the vehicle (an AR space model, i.e., a data representation such as a contour graph of the photographed subject). Of course, the characteristic data may also include other information such as the vehicle's brand, model, color, outline, and unique identification code.
When the client starts the damage assessment service, guide information for photographing the damaged area can be displayed. For convenience of description, the damaged area currently or initially to be photographed is referred to as the first damaged area. For example, in one application instance, when the user initiates a damage assessment service, the application may prompt the user to photograph, from a distance that allows a clear view of the vehicle, the sides where damage may exist. If necessary, the user can be prompted to move around the vehicle body; if no damage is found during the initial shooting, the user can be prompted to photograph the vehicle comprehensively all around. When it is identified that damage exists in the current shooting window (referred to here as the first damage), the damaged area corresponding to that damage can be further calculated and determined.
In some embodiments of this specification, damage identification may be performed on the client side or on the server side; the server may be referred to as a damage identification server. In some application scenarios, or where computing power permits, damage identification or other damage assessment processing may be performed directly on the client for the captured images, which reduces network transmission overhead. Of course, as mentioned above, the server side typically has greater computing power than the client. Thus, in another embodiment of the method provided in this specification, damage identification may be performed on the server side. Specifically, identifying that the first damage exists in the current shooting window may include:
S20: sending the captured image to a damage identification server;
S22: receiving a damage identification result returned by the server, the damage identification result including a processing result obtained by the damage identification server performing damage identification on the captured image using a pre-trained deep neural network.
In this embodiment, "first" merely designates the damage identification currently being performed and does not limit the damage identification processing performed on images of other damage.
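As a minimal sketch of the client/server split in steps S20 and S22, the client sends a captured image to the damage identification server and parses the returned result. The transport and the JSON message layout here are assumptions for illustration; the patent does not specify a wire format.

```python
# Hypothetical client-side helper for the S20/S22 exchange: upload the
# captured image and parse the damage identification result. `send_request`
# stands in for any transport (HTTP, RPC, ...); field names are invented.
import json

def identify_damage_remote(image_bytes, send_request):
    """Send the captured image and return the list of identified damages.

    `send_request` is expected to return a JSON string such as:
    '{"damages": [{"type": "scratch", "box": [x, y, w, h]}]}'
    """
    response = send_request(image_bytes)   # S20: send image to the server
    result = json.loads(response)          # S22: parse the returned result
    return result.get("damages", [])
```

In practice `send_request` would wrap a network call to the damage identification server, and the server side would run the pre-trained deep neural network over the uploaded image.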
In the above embodiments, the client side or the server side may identify the damage in the image, such as the damage position, the damaged part, and the damage type, using a deep neural network that is built in advance or trained in real time.
The deep neural network can be used for target detection and semantic segmentation: given an input picture, it finds the position of the target in the picture. FIG. 2 is a schematic diagram of a deep neural network model used in an embodiment of the method described herein. The network shown in FIG. 2 is a typical deep neural network; by labeling a large number of damaged-area pictures in advance and training the network on them, it can output the extent of the damaged area for pictures of the vehicle taken from various orientations and under various lighting conditions. In addition, some embodiments of this specification may use a network architecture tailored to mobile devices, such as one based on MobileNet or SqueezeNet or modifications thereof, so that the model can run in the lower-power, smaller-memory, slower-processor environment of a mobile device, such as the client's mobile terminal operating environment.
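The mobile-tailored architectures mentioned above (MobileNet-style networks) are built around depthwise-separable convolutions, which shrink the parameter count relative to standard convolutions. A back-of-the-envelope comparison, with illustrative layer sizes:

```python
# Why MobileNet-style networks suit low-power devices: a depthwise-separable
# convolution replaces one K*K*M*N standard convolution with a K*K depthwise
# step plus a 1x1 pointwise step. The layer sizes below are made up for
# illustration; only the parameter-count arithmetic is the point.

def standard_conv_params(k, m, n):
    # K x K kernel, M input channels, N output channels
    return k * k * m * n

def separable_conv_params(k, m, n):
    # depthwise part (K x K per input channel) + pointwise 1x1 part (M x N)
    return k * k * m + m * n

std = standard_conv_params(3, 32, 64)    # 3*3*32*64 = 18432 parameters
sep = separable_conv_params(3, 32, 64)   # 3*3*32 + 32*64 = 2336 parameters
```

For this example layer, the separable form needs roughly one eighth of the parameters, which is the kind of saving that lets such models run on a slower mobile processor.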
After the first damaged area is determined, the area can be rendered in a salient manner, and the area covered by the damage can be overlaid and rendered in the shooting view via AR technology. Rendering in a salient manner mainly means marking the damaged area in the shooting view with a rendering style that has distinguishing characteristics, so that the damaged area is easy to recognize or stands out. This embodiment does not limit the specific rendering mode; the constraints or conditions that qualify as rendering in a salient manner can be set as needed.
In another embodiment of the method provided in this specification, rendering in a salient manner may include:
S40: marking the first damaged area with a preset characterization symbol, wherein the preset characterization symbol includes one of the following:
dots, guide lines, regular graphic frames, irregular graphic frames, custom graphics.
FIG. 3 is a schematic diagram, provided in this specification, of marking a damaged area using dot-symbol rendering. Of course, in other embodiments the preset characterization symbol may take other forms, such as guide lines, regular graphic frames, irregular graphic frames, or custom graphics; in still other embodiments, the damaged area may be marked with text, data, and so on to guide the user to photograph it. One or more preset characterization symbols may be used in rendering. In this embodiment, marking the damaged area with the preset characterization symbol displays the location of the damage more prominently in the shooting window, helping the user locate it quickly and guiding the shooting.
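One hypothetical way to realize the dot-symbol rendering of FIG. 3 is to place marker dots at regular intervals along the damaged area's bounding box; the function below is an illustrative sketch, not taken from the patent:

```python
# Generate dot positions along a damaged area's bounding box so the region
# stands out in the shooting window. (x, y) is the top-left corner, (w, h)
# the box size, `step` the dot spacing; all values in pixels.

def perimeter_dots(x, y, w, h, step):
    """Return (px, py) dot positions along the rectangle border."""
    dots = []
    for px in range(x, x + w + 1, step):      # top and bottom edges
        dots.append((px, y))
        dots.append((px, y + h))
    for py in range(y + step, y + h, step):   # left and right edges (corners done)
        dots.append((x, py))
        dots.append((x + w, py))
    return dots
```

A rendering layer would then draw one dot symbol per returned position on top of the live camera view.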
In another embodiment of the method provided in this specification, the damaged area may be marked with a dynamic rendering effect, guiding the user to photograph it in an even more conspicuous manner. Specifically, in another embodiment, rendering in a salient manner includes:
S400: animating the preset characterization symbol with at least one of color change, size change, rotation, and bouncing.
In some embodiments of this specification, the boundary of the damage can be superimposed on the real scene via AR, prompting the user to aim the viewfinder at that portion for shooting. Augmented reality (AR) generally refers to technology that computes the position and angle of the camera image in real time and adds corresponding images, videos, and 3D models, overlaying a virtual world on the real world on screen and enabling interaction. The augmented reality space model constructed from the characteristic data in the embodiments of this specification may be contour information of the vehicle; specifically, the contour of the vehicle may be constructed based on the acquired model number and shooting angle together with characteristic data such as the positions of the tires, roof, front face, headlights, taillights, and front and rear windows. The contour may include a data model built on 3D coordinates, with corresponding 3D coordinate information. The constructed contour can then be presented in the shooting window. Of course, this specification does not exclude that the augmented reality space model in other embodiments may take other model forms or add other model information on top of the contour.
The AR model may be matched with the actual vehicle position during shooting, for example by superimposing the constructed 3D contour on the contour of the actual vehicle; matching may be considered complete when the two coincide exactly or the degree of matching reaches a threshold. In a specific matching process, the user can be guided, by directing the framing and by moving the shooting direction or angle, to align the constructed contour with the contour of the real vehicle being photographed. By combining augmented reality, the embodiments of this specification display both the real information of the vehicle captured by the user's client and the constructed augmented reality space model of the vehicle, the two complementing and overlaying each other, which can provide a better damage assessment service experience.
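The "matching degree reaches a threshold" test described above could be implemented, for example, as an intersection-over-union (IoU) comparison between the projected contour and the detected vehicle region; the 0.8 threshold below is an assumed value, not one taken from the patent:

```python
# Decide whether the constructed contour "matches" the real vehicle by the
# overlap of their 2D bounding regions. Boxes are (x, y, w, h) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def contour_matched(projected, detected, threshold=0.8):
    """True once the matching degree reaches the (assumed) threshold."""
    return iou(projected, detected) >= threshold
```

While `contour_matched` is False, the guidance layer would keep prompting the user to shift the framing until the contours align.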
A shooting window combined with the AR space model can present the on-site condition of the vehicle more intuitively and effectively guide damage assessment shooting at the damaged position. The client may perform damage identification guidance in the AR scene, which may specifically include shooting guide information to be presented that is determined from image information acquired in the shooting window. The client can acquire image information of the AR scene in the shooting window, analyze it, and determine from the analysis result what shooting guide information needs to be displayed in the shooting window. For example, if the vehicle is far away in the current shooting window, the user may be prompted in the window to move closer. If the shooting position is too far to the left and the rear of the vehicle cannot be captured, shooting guide information can be displayed prompting the user to shift the shooting angle to the right. The data processed in damage identification guidance, and which shooting guide information is displayed under which conditions, can be governed by preset policies or rules, which this embodiment does not describe one by one.
In this embodiment, shooting guide information for the first damaged area may be presented. Specifically, the shooting guide information to be displayed can be determined from the current shooting information and the position information of the first damaged area. For example, suppose a scratch on the rear fender of the vehicle needs to be photographed head-on and along the direction of the scratch, but according to the current shooting position and angle information the user is shooting at an oblique 45 degrees and is far from the scratch. The user can then be prompted to move closer to the scratch and to shoot from directly in front of it and along its direction. The shooting guide information can be adjusted in real time according to the current framing; for example, once the user is close enough to the scratch to meet the shooting requirement, the prompt to move closer need no longer be displayed. The suspected damage can be identified on the client or server side.
The shooting guide information to be displayed during shooting, the shooting conditions, and so on can be set according to the damage assessment interaction design or processing requirements. In one embodiment provided in this specification, the shooting guide information may include at least one of:
adjusting the shooting direction;
adjusting the shooting angle;
adjusting the shooting distance;
adjusting the shooting light.
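The four adjustments listed above could be driven by simple rules over the current shooting state; the thresholds and field names below are invented for illustration:

```python
# Rule-based sketch mapping the current shooting state to guide prompts,
# covering distance, direction/angle, and lighting adjustments.

def shooting_guidance(state):
    """Return the list of prompt strings for the current shooting state."""
    prompts = []
    if state.get("distance_m", 0) > 1.5:                    # too far away
        prompts.append("Move closer to the damaged area")
    if abs(state.get("angle_deg", 0)) > 15:                 # too oblique
        prompts.append("Face the damage head-on")
    if state.get("brightness", 1.0) < 0.3:                  # too dark
        prompts.append("Adjust lighting or move to a brighter spot")
    return prompts
```

An empty return list would mean the current framing already meets the shooting requirements and no guide information needs to be shown.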
An example of shooting guidance is shown in FIG. 4. Real-time shooting guide information lets the user carry out damage assessment more conveniently and efficiently. The user shoots by following the guide information, without needing professional shooting skills or complicated operations, for a better user experience. The embodiments above describe shooting guide information displayed as text; in extensible embodiments, it may also be presented via images, voice, animation, vibration, and the like, for example directing the current shooting frame toward a certain area with an arrow or a voice prompt. Thus, in another embodiment of the method, the form of the shooting guide information presented in the current shooting window includes at least one of a symbol, text, voice, animation, video, and vibration.
In another embodiment of the method, when the user aims the camera of the mobile device at the vehicle, shooting can be performed at a certain frame rate (e.g. 15 frames/s), and the captured images can then be identified using a trained deep neural network. If damage is detected, a new shooting strategy can be started for the damaged area, such as raising the shooting frame rate (e.g. to 30 frames/s) and acquiring and adjusting other parameters, so that the position of the area in the current shooting window is tracked continuously at higher speed and lower power consumption. Shooting parameters can thus be adjusted per shooting area, using different shooting strategies to flexibly adapt to different shooting scenes: shooting of key areas is enhanced, while the frame rate for non-key areas is lowered to reduce power consumption. Thus, in another embodiment of the method provided herein, upon identifying the presence of damage in the current shooting window, the damaged area is shot with a shooting strategy that adjusts parameters including at least the shooting frame rate.
Of course, other parameters such as exposure, brightness, etc. may also be adjusted. The specific shooting strategy can be set according to shooting scenes in a self-defined manner.
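As an illustration only, the per-frame strategy switch described above might be sketched as follows; the dict-based frame representation and the `detect_damage` stub are hypothetical stand-ins for the camera pipeline and the trained deep neural network:

```python
BASE_FPS = 15    # scanning frame rate from the example above
BOOST_FPS = 30   # frame rate once damage is detected

def detect_damage(frame):
    """Placeholder for the trained deep neural network; returns the
    damaged-area box or None (the dict frame format is hypothetical)."""
    return frame.get("damage")

def shooting_strategy(frame):
    """Choose per-frame shooting parameters: boost the frame rate
    (and, optionally, exposure or brightness) for a detected damaged
    area, keep the lower-power baseline otherwise."""
    if detect_damage(frame) is not None:
        return {"fps": BOOST_FPS, "exposure": "damage-priority"}
    return {"fps": BASE_FPS, "exposure": "auto"}

print(shooting_strategy({"damage": (40, 60, 200, 140)}))  # boosted strategy
```

A custom strategy for a specific shooting scene would simply return a different parameter set from `shooting_strategy`.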
Further, after enough photos or videos have been acquired for the first damaged area (meeting the damage-assessment image acquisition requirement), the user can be prompted to shoot the next damage, until all identified damage has been shot. In this way, after the user shoots one damage, the user is continuously guided to shoot the next, which reduces missed damage shots, reduces the user's involvement in damage identification, and improves the user experience. Thus, in another embodiment of the method, as shown in fig. 5, the method may further include:
s8: and if the first damaged area is determined to be shot completely, displaying shooting guide information for shooting a second damaged area of the vehicle until the identified damage is shot completely.
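The shoot-until-complete loop of step S8 can be sketched as follows; `capture_area` is a hypothetical host-application callback that guides and shoots one area until it meets the acquisition requirement:

```python
def capture_all_damage(damaged_areas, capture_area):
    """Iterate over the identified damaged areas: guide the shooting
    of each area until it meets the damage-assessment image
    requirement, then move on to the next, until all damage is shot."""
    images = []
    for area in damaged_areas:
        print(f"please shoot damaged area: {area}")
        images.extend(capture_area(area))
    return images

# stub capture callback returning the qualified images for one area
shots = capture_all_damage(
    ["first damaged area", "second damaged area"],
    lambda area: [f"{area}/img-1", f"{area}/img-2"])
```

In a real client the list of damaged areas would grow as identification proceeds rather than being fixed up front; the loop structure is the same.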
The client application may return the captured damage images to the insurance company for subsequent manual or automatic damage assessment. This can also avoid or reduce the risk of insurance fraud through user falsification of images. Thus, in another embodiment of the method provided herein, the method further comprises:
S10: and transmitting the shot image meeting the acquisition requirement of the damage assessment image to a damage assessment server.
The damage assessment server may comprise a server on the insurance company side, or a server of a damage assessment service provider. Transmission to the damage assessment server may include direct transmission by the client, or indirect transmission. Of course, the qualified damage images may also be sent both to the insurance company's server and to the damage assessment service provider's server, such as the server of a damage assessment service provided by a certain payment application.
It should be noted that the term real-time in the foregoing embodiments may include sending, receiving, or displaying certain data information immediately after it is obtained or determined; those skilled in the art will understand that sending, receiving, or displaying after buffering, necessary computation, or a waiting time may still fall within the definition of real-time. The images described in the embodiments of the present description may include video, which may be regarded as a continuous set of images.
In addition, in the embodiments of the present disclosure, the acquired captured images, or the damage images meeting the requirements, may be stored at the local client or uploaded to a remote server in real time. Storing the data at the local client or uploading it to the server for storage can effectively prevent the damage assessment data from being tampered with, or insurance fraud committed by reusing image data from other, non-current accidents. Therefore, the embodiments of the specification can also improve the data security of the damage assessment process and the reliability of the damage assessment result.
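One common way to make such stored or uploaded data tamper-evident is to attach a digest and capture time when packaging an image for upload; the following sketch is illustrative only, and all field names are hypothetical:

```python
import hashlib
import json
import time

def package_for_upload(image_bytes, account_id, now=None):
    """Bundle a qualified damage image with a digest and capture time
    so the server can detect later tampering or reuse of images from
    another accident (field names are illustrative)."""
    return {
        "account": account_id,
        "captured_at": int(now if now is not None else time.time()),
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        # the image bytes themselves would travel as the request body
    }

pkg = package_for_upload(b"fake-image-bytes", "user-42")
print(json.dumps(pkg, indent=2))
```

The server recomputes the digest on receipt; any later modification of the stored image no longer matches the recorded hash.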
The above embodiments describe a data processing method for a user performing vehicle damage assessment at a mobile phone client. It should be noted that the method described in the embodiments of the present disclosure may be implemented in a variety of processing devices, such as a dedicated damage assessment terminal, and in implementation scenarios including a client-server architecture.
In the present specification, the method embodiments are described in a progressive manner; identical and similar parts of the embodiments are referred to each other, and each embodiment mainly describes its differences from the other embodiments. For relevant parts, refer to the description of the method embodiments.
The method embodiments provided by the embodiments of the application can be executed on a mobile terminal, a PC, a dedicated damage assessment terminal, a server, or a similar computing device. Taking operation on a mobile terminal as an example, fig. 6 is a hardware structure block diagram of a client applying an embodiment of the method or apparatus of the present application for interactive processing of vehicle damage. As shown in fig. 6, the client 10 may include one or more processors 102 (only one is shown in the figure; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 6 is merely illustrative and does not limit the configuration of the electronic device described above. For example, client 10 may include more or fewer components than shown in fig. 6, may include other processing hardware such as a GPU (Graphics Processing Unit), or may have a different configuration than shown in fig. 6.
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the vehicle damage assessment data processing method in the embodiments of the present disclosure. The processor 102 executes the software programs and modules stored in the memory 104, thereby executing various functional applications and data processing, that is, implementing the vehicle damage assessment data processing method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 104 may further include memory remotely located with respect to processor 102, which may be connected to client 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by the communication provider of the client 10. In one example, the transmission module 106 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission module 106 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
Based on the vehicle damage assessment data processing method described above, the specification also provides a vehicle damage assessment data processing apparatus. The apparatus may comprise systems (including distributed systems), software (applications), modules, components, servers, clients, etc. that use the methods described in the embodiments of the present specification, in combination with the necessary hardware. Based on the same innovative concept, the processing apparatus in one embodiment provided in the present specification is described in the following embodiments. Because the implementation scheme by which the apparatus solves the problem is similar to that of the method, the implementation of the specific processing apparatus in the embodiments of the present disclosure may refer to the implementation of the foregoing method, and repetition is omitted. While the apparatus described in the following embodiments is preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated. Specifically, as shown in fig. 7, fig. 7 is a schematic block diagram of an embodiment of a data processing apparatus for vehicle damage assessment provided in the present specification, which may specifically include:
the first prompt module 201, which may be used for displaying shooting guide information for shooting a first damaged area of the vehicle;
the damage identification result module 202, which may be configured to determine a first damaged area of a first damage if the first damage is identified in the current shooting window;
the salient display module 203, which may be configured to render the first damaged area in a salient manner and then display the rendered first damaged area superimposed in the current shooting window using augmented reality;
the second prompt module 204, which may be configured to display shooting guidance information for the first damaged area.
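The cooperation of the four modules (201-204) of fig. 7 can be sketched as follows; the class, its callable-injection style, and the dict frame format are illustrative assumptions, not the patented implementation:

```python
class VehicleDamageDevice:
    """Sketch of wiring the four modules of fig. 7; each module is
    supplied as a callable by the host application."""
    def __init__(self, first_prompt, identify, salient_display, second_prompt):
        self.first_prompt = first_prompt        # module 201
        self.identify = identify                # module 202
        self.salient_display = salient_display  # module 203
        self.second_prompt = second_prompt      # module 204

    def process_frame(self, frame):
        self.first_prompt()
        area = self.identify(frame)  # None when no damage in window
        if area is not None:
            self.salient_display(area)   # AR overlay of the area
            self.second_prompt(area)     # guidance for this area
        return area

shown = []
dev = VehicleDamageDevice(
    lambda: shown.append("guide"),
    lambda f: f.get("damage"),
    lambda a: shown.append(("overlay", a)),
    lambda a: shown.append(("guide-for", a)))
dev.process_frame({"damage": (10, 20, 30, 40)})
```

Running `process_frame` per camera frame reproduces the prompt → identify → highlight → guide sequence of the method embodiments.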
It should be noted that, per the description of the related method embodiments, the foregoing apparatus may further include other implementations, such as a rendering processing module that performs the rendering, an AR display module that performs the AR processing, and so on. For specific implementation, refer to the description of the method embodiments, which is not repeated here.
The method provided in the embodiments of the present disclosure may be implemented in a computer by a processor executing corresponding program instructions, for example implemented on the PC/server side using the C++/Java language on a Windows/Linux operating system, implemented in the corresponding application design language of an Android or iOS system together with the necessary hardware, or implemented by processing logic based on a quantum computer, etc. Specifically, in an embodiment of a vehicle damage assessment data processing device provided in the present disclosure implementing the above method, the processing device may include a processor and a memory for storing instructions executable by the processor, where the processor, when executing the instructions, implements:
Displaying shooting guide information of shooting a first damaged area of the vehicle;
if the first damage exists in the current shooting window, determining a first damage area of the first damage;
after the first damaged area is rendered in a salient manner, displaying the rendered first damaged area superimposed in the current shooting window using augmented reality;
and displaying shooting guide information aiming at the first damaged area.
Based on the foregoing description of the method embodiments, in another embodiment of the processing device, the processor further performs:
and if the first damaged area is determined to be shot completely, displaying shooting guide information for shooting a second damaged area of the vehicle until the identified damage is shot completely.
Based on the foregoing method embodiment description, in another embodiment of the processing device, the salient mode rendering includes:
identifying the first damaged area by adopting a preset characterization symbol, wherein the preset characterization symbol comprises one of the following:
dots, guidelines, regular graphic frames, irregular graphic frames, custom graphics.
Based on the foregoing method embodiment description, in another embodiment of the processing device, the salient mode rendering includes:
performing on the preset characterization symbol at least one of the following animated displays: color change, size change, rotation, and jumping.
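To illustrate the salient rendering and its animation, the sketch below clamps the damaged-area box to the shooting window and cycles the characterization symbol's color for a color-change animation; the symbol and color names are purely illustrative:

```python
import itertools

def salient_overlay(window_w, window_h, box, symbol="graphic frame",
                    palette=("red", "yellow")):
    """Describe the overlay for one damaged area: the box is clamped
    to the shooting window, and the symbol cycles through palette
    colors to produce the color-change animation (names are
    illustrative)."""
    x, y, w, h = box
    corners = ((max(0, x), max(0, y)),
               (min(window_w, x + w), min(window_h, y + h)))
    colors = itertools.cycle(palette)
    return [{"symbol": symbol, "corners": corners, "color": c}
            for c in itertools.islice(colors, 4)]  # four animation frames

frames = salient_overlay(1280, 720, (1100, 600, 300, 200))
print(frames[0]["corners"])  # a box partly outside the window is clamped
```

Size change, rotation, or jumping would be additional per-frame transforms of the same overlay description before the AR layer composites it over the live view.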
In another embodiment of the processing device, the shooting guide information includes at least one of the following:
adjusting the shooting direction;
adjusting a shooting angle;
adjusting the shooting distance;
and adjusting shooting light.
In another embodiment of the processing device, the form of the shooting guide information displayed in the current shooting window includes at least one of a symbol, text, voice, animation, video, and vibration.
Based on the foregoing description of the method embodiment, in another embodiment of the processing device, the processor identifying that the first damage exists in the current shooting window includes:
sending the acquired image obtained by shooting to a damage identification server;
and receiving a damage identification result returned by the server, wherein the damage identification result comprises a processing result obtained by the damage identification server for carrying out damage identification on the acquired image by utilizing a pre-trained deep neural network.
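The client-server round trip of the two steps above can be sketched as follows; the dict-based image representation and both function names are hypothetical, and the network transport is omitted:

```python
def server_identify(image):
    """Stand-in for the damage identification server; a real server
    would run the pre-trained deep neural network on the image."""
    if image.get("has_scratch"):  # hypothetical image representation
        return {"damage": "scratch", "box": (120, 80, 300, 60)}
    return None

def client_identify(image, send=server_identify):
    """Client side: send the captured image and receive the damage
    identification result (transport omitted for brevity)."""
    return send(image)

result = client_identify({"has_scratch": True})
```

The same `client_identify` shape also covers the purely client-side variant mentioned earlier: `send` would then be a local model call instead of a network request.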
In another embodiment of the processing device, when damage is identified in the current shooting window, the processor shoots the damaged area using a shooting strategy that adjusts parameters including at least the shooting frame rate.
Based on the foregoing description of the method embodiments, in another embodiment of the processing device, the processor further performs:
and transmitting the shot image meeting the acquisition requirement of the damage assessment image to a damage assessment server.
It should be noted that, for the processing device described in the foregoing embodiments, the description of the related method embodiments may further include other extensible embodiments. For specific implementation, refer to the description of the method embodiments, which is not repeated here.
The instructions described above may be stored in a variety of computer readable storage media. The computer readable storage medium may include physical means for storing information, where the information may be stored electronically, magnetically, optically, etc. The computer readable storage medium according to the present embodiment may include: means for storing information using electrical energy, such as various memories, e.g., RAM, ROM, etc.; means for storing information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memory, bubble memory, and USB flash drives; and means for optically storing information, such as CDs or DVDs. Of course, other forms of readable storage media are possible, such as quantum memory, graphene memory, etc. The instructions may be those in an apparatus, server, client, or system described in the embodiments of the present specification.
The foregoing method or apparatus embodiments can be applied to a user's client, such as a smart phone. Accordingly, the present specification provides a client comprising a processor and a memory for storing processor executable instructions, where the instructions, when executed by the processor, implement:
displaying shooting guide information of shooting a first damaged area of the vehicle;
if the first damage exists in the current shooting window, determining a first damage area of the first damage;
after the first damaged area is rendered in a salient manner, displaying the rendered first damaged area superimposed in the current shooting window using augmented reality;
and displaying shooting guide information aiming at the first damaged area.
Based on the foregoing, the embodiments of the present disclosure also provide an electronic device including a display screen, a processor, and a memory storing instructions executable by the processor.
Fig. 8 is a schematic structural diagram of an embodiment of an electronic device provided in the present specification, where the processor may implement the method steps described in any one of the embodiments of the present specification when executing the instructions.
The embodiments of the apparatus, the client, the electronic device, and the like described in the specification are all described in a progressive manner; identical and similar parts among the embodiments are referred to each other, and each embodiment mainly describes its differences from the other embodiments. In particular, for the hardware-plus-program class embodiments, the description is relatively simple since they are substantially similar to the method embodiments; for relevant parts, see the partial description of the method embodiments.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Although the application provides method operational steps as described in the examples or flowcharts, more or fewer operational steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. When implemented by an actual device or client product, the instructions may be executed sequentially or in parallel (e.g., in a parallel processor or multi-threaded processing environment) as shown in the embodiments or figures.
Although the present embodiments refer to operations and data descriptions such as AR technology, presentation of shooting guidance information, shooting guidance for interaction with the user, and preliminary identification of the damage position using a deep neural network, together with the associated positioning, interaction, calculation, and judgment, the embodiments of the present description are not limited to cases that conform to industry communication standards, standard image data processing protocols, communication protocols, or standard data models/templates. Implementations slightly modified from certain industry standards, or from the embodiments described here in a custom manner, can also achieve the same, equivalent, similar, or otherwise predictable effects. Embodiments using such modified or varied data acquisition, storage, judgment, and processing are still within the scope of the alternative embodiments of the present description.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many improvements to method flows today can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compilation must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in pure computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing the various functions may also be regarded as structures within the hardware component. Or even the means for performing the various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a car-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, it is not excluded that additional identical or equivalent elements may be present in a process, method, article, or apparatus that comprises a described element.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, when implementing the embodiments of the present disclosure, the functions of each module may be implemented in the same or multiple pieces of software and/or hardware, or a module that implements the same function may be implemented by multiple sub-modules or a combination of sub-units, or the like. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description embodiments may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present embodiments may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments are referred to each other, and each embodiment mainly describes its differences from the other embodiments. In particular, for the system embodiments, the description is relatively simple since they are substantially similar to the method embodiments; for relevant parts, see the description of the method embodiments. In the description of the present specification, a description referring to the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the embodiments of the present specification. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification, and the features of the different embodiments or examples, may be combined by those skilled in the art without contradiction.
The foregoing is merely illustrative of the embodiments of the present specification and is not intended to limit them. Various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the embodiments of this specification shall fall within the scope of the claims of the embodiments of this specification.

Claims (9)

1. A method of processing vehicle damage assessment data, comprising:
displaying a shooting window, so that a vehicle can be photographed through the shooting window;
when it is identified that damage exists in the current shooting window, starting a new shooting strategy for the damaged area, the new shooting strategy being determined by adjusting shooting parameters for different shooting areas;
shooting the damaged area;
wherein, after displaying the shooting window, the method further comprises:
when the damage assessment service is started, displaying shooting guide information for a first damaged area of the vehicle;
correspondingly, starting a new shooting strategy for the damaged area when it is identified that damage exists in the current shooting window comprises:
if it is identified that a first damage exists in the current shooting window, determining a first damaged area of the first damage;
after saliently rendering the first damaged area, displaying the rendered first damaged area superimposed on the current shooting window using augmented reality;
adjusting the shooting parameters to obtain a new shooting strategy for the first damaged area;
and displaying shooting guide information for the first damaged area according to the new shooting strategy.
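The core of claim 1 — detecting a damaged area in the current frame and deriving a region-specific shooting strategy by adjusting capture parameters — can be sketched as follows. This is a minimal illustration, not the patented implementation; the class, the parameter defaults, and the adjustment rules (e.g. raising frame rate for scratches) are all hypothetical.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ShootingStrategy:
    # Baseline capture parameters (illustrative values; claim 8 names
    # frame rate, exposure, and brightness as adjustable parameters).
    frame_rate: int = 30      # frames per second
    exposure: float = 0.0     # exposure compensation (EV)
    brightness: float = 0.5   # normalized 0..1

def strategy_for_damaged_area(base: ShootingStrategy, area: dict) -> ShootingStrategy:
    """Derive a new shooting strategy for a detected damaged area.

    `area` is a hypothetical detection result with a bounding box,
    damage type, and lighting hint; a real system would tune the
    parameters per shooting region as the claim describes.
    """
    new = base
    if area.get("damage_type") == "scratch":
        # Fine scratches: raise frame rate and exposure to capture detail.
        new = replace(new, frame_rate=60, exposure=0.5)
    if area.get("low_light"):
        new = replace(new, brightness=0.8)
    return new

base = ShootingStrategy()
area = {"bbox": (120, 80, 260, 190), "damage_type": "scratch", "low_light": True}
print(strategy_for_damaged_area(base, area))
```

The frozen dataclass plus `replace` keeps each derived strategy immutable, so the client can fall back to the baseline once the damaged area has been shot.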
2. The method as recited in claim 1, further comprising:
and if it is determined that shooting of the first damaged area is complete, displaying shooting guide information for shooting a second damaged area of the vehicle, until all identified damage has been shot.
3. The method of claim 1, wherein the salient rendering comprises:
identifying the first damaged area with a preset characterization symbol, wherein the preset characterization symbol comprises one of the following:
a dot, a guideline, a regular graphic frame, an irregular graphic frame, or a custom graphic.
4. The method of claim 3, wherein the salient rendering comprises:
animating the preset characterization symbol with at least one of color conversion, size conversion, rotation, and jumping.
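One of claim 3's preset characterization symbols, the regular graphic frame, can be illustrated with a toy overlay: a character grid stands in for a camera frame and a rectangular frame marks the damaged area. The function name and the grid representation are illustrative only; a real client would draw the symbol on the AR layer over the live viewfinder.

```python
def render_salient_box(width: int, height: int, bbox: tuple, symbol: str = "#") -> list:
    """Overlay a regular rectangular frame (a preset characterization
    symbol) around a damaged area on a character grid that stands in
    for one camera frame."""
    x0, y0, x1, y1 = bbox  # damaged-area bounding box, inclusive
    grid = [["." for _ in range(width)] for _ in range(height)]
    for x in range(x0, x1 + 1):      # top and bottom edges
        grid[y0][x] = symbol
        grid[y1][x] = symbol
    for y in range(y0, y1 + 1):      # left and right edges
        grid[y][x0] = symbol
        grid[y][x1] = symbol
    return ["".join(row) for row in grid]

for line in render_salient_box(12, 6, (2, 1, 9, 4)):
    print(line)
```

Claim 4's animation (color conversion, size conversion, rotation, jumping) would then redraw this symbol frame-by-frame with varying parameters.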
5. The method of claim 1, wherein the shooting guide information comprises at least one of:
adjusting the shooting direction;
adjusting a shooting angle;
adjusting the shooting distance;
and adjusting shooting light.
6. The method of claim 1, wherein the shooting guide information is displayed in the current shooting window in at least one of the following forms: a symbol, text, voice, an animation, a video, and a vibration.
7. The method of claim 1, wherein identifying that a first damage exists in the current shooting window comprises:
sending a captured image obtained by shooting to a damage identification server;
and receiving a damage identification result returned by the damage identification server, wherein the damage identification result comprises a processing result obtained by the damage identification server performing damage identification on the captured image using a pre-trained deep neural network.
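The client-server round trip of claim 7 can be sketched with a local stub in place of the damage identification server. Everything here is illustrative: the JSON schema, the function names, and the stub itself are assumptions, and `identify_damage_stub` merely returns a canned result where the real server would run a pre-trained deep neural network over the image.

```python
import json

def identify_damage_stub(image_bytes: bytes) -> str:
    """Stand-in for the damage identification server. A real deployment
    would receive the image over HTTP and run a pre-trained deep neural
    network; here we return a canned JSON result for illustration."""
    return json.dumps({
        "damages": [
            {"bbox": [120, 80, 260, 190], "type": "scratch", "score": 0.91}
        ]
    })

def client_round_trip(image_bytes: bytes) -> list:
    """Send the captured image, then receive and parse the
    damage identification result (claim 7)."""
    response = identify_damage_stub(image_bytes)  # would be an HTTP POST
    result = json.loads(response)
    return result["damages"]

damages = client_round_trip(b"\x00fake-jpeg-bytes")
print(damages[0]["type"])  # scratch
```

The parsed bounding box is what claims 1 and 3 would then hand to the salient-rendering and strategy-adjustment steps.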
8. The method of claim 1, wherein the shooting parameters comprise at least one of: shooting frame rate, exposure, and brightness.
9. The method as recited in claim 1, further comprising:
and transmitting a shot image that meets the damage assessment image acquisition requirements to a damage assessment server.
CN202110345608.8A 2018-05-08 2018-05-08 Vehicle loss assessment data processing method and device, processing equipment and client Active CN113179368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110345608.8A CN113179368B (en) 2018-05-08 2018-05-08 Vehicle loss assessment data processing method and device, processing equipment and client

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110345608.8A CN113179368B (en) 2018-05-08 2018-05-08 Vehicle loss assessment data processing method and device, processing equipment and client
CN201810432696.3A CN108632530B (en) 2018-05-08 2018-05-08 Data processing method, device and equipment for vehicle damage assessment, client and electronic equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201810432696.3A Division CN108632530B (en) 2018-05-08 2018-05-08 Data processing method, device and equipment for vehicle damage assessment, client and electronic equipment

Publications (2)

Publication Number Publication Date
CN113179368A CN113179368A (en) 2021-07-27
CN113179368B CN113179368B (en) 2023-10-27

Family

ID=63695894

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201810432696.3A Active CN108632530B (en) 2018-05-08 2018-05-08 Data processing method, device and equipment for vehicle damage assessment, client and electronic equipment
CN202110345608.8A Active CN113179368B (en) 2018-05-08 2018-05-08 Vehicle loss assessment data processing method and device, processing equipment and client

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201810432696.3A Active CN108632530B (en) 2018-05-08 2018-05-08 Data processing method, device and equipment for vehicle damage assessment, client and electronic equipment

Country Status (3)

Country Link
CN (2) CN108632530B (en)
TW (1) TW201947452A (en)
WO (1) WO2019214319A1 (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017123761A1 (en) * 2016-01-15 2017-07-20 Irobot Corporation Autonomous monitoring robot systems
CN108632530B (en) * 2018-05-08 2021-02-23 创新先进技术有限公司 Data processing method, device and equipment for vehicle damage assessment, client and electronic equipment
CN109447171A (en) * 2018-11-05 2019-03-08 电子科技大学 A kind of vehicle attitude classification method based on deep learning
CN110245552B (en) * 2019-04-29 2023-07-18 创新先进技术有限公司 Interactive processing method, device, equipment and client for vehicle damage image shooting
CN110427810B (en) * 2019-06-21 2023-05-30 北京百度网讯科技有限公司 Video damage assessment method, device, shooting end and machine-readable storage medium
CN110659567B (en) * 2019-08-15 2023-01-10 创新先进技术有限公司 Method and device for identifying damaged part of vehicle
CN113038018B (en) * 2019-10-30 2022-06-28 支付宝(杭州)信息技术有限公司 Method and device for assisting user in shooting vehicle video
BR112022010837A2 (en) * 2019-12-02 2022-09-13 Click Ins Ltd SYSTEMS, METHODS AND PROGRAMS TO GENERATE IMPRESSION OF DAMAGE IN A VEHICLE
CN111489433B (en) * 2020-02-13 2023-04-25 北京百度网讯科技有限公司 Method and device for positioning damage of vehicle, electronic equipment and readable storage medium
CN111368752B (en) * 2020-03-06 2023-06-02 德联易控科技(北京)有限公司 Vehicle damage analysis method and device
CN111475157B (en) * 2020-03-16 2024-04-19 中保车服科技服务股份有限公司 Image acquisition template management method, device, storage medium and platform
CN111340974A (en) * 2020-04-03 2020-06-26 北京首汽智行科技有限公司 Method for recording damaged part of shared automobile
CN112492105B (en) * 2020-11-26 2022-04-15 深源恒际科技有限公司 Video-based vehicle appearance part self-service damage assessment acquisition method and system
CN112712498A (en) * 2020-12-25 2021-04-27 北京百度网讯科技有限公司 Vehicle damage assessment method and device executed by mobile terminal, mobile terminal and medium
CN113033372B (en) * 2021-03-19 2023-08-18 北京百度网讯科技有限公司 Vehicle damage assessment method, device, electronic equipment and computer readable storage medium
CN113486725A (en) * 2021-06-11 2021-10-08 爱保科技有限公司 Intelligent vehicle damage assessment method and device, storage medium and electronic equipment
CN113256778B (en) * 2021-07-05 2021-10-12 爱保科技有限公司 Method, device, medium and server for generating vehicle appearance part identification sample
KR102366017B1 (en) * 2021-07-07 2022-02-23 쿠팡 주식회사 Method and apparatus for providing information for service on installation
CN113840085A (en) * 2021-09-02 2021-12-24 北京城市网邻信息技术有限公司 Vehicle source information acquisition method and device, electronic equipment and readable medium
CN113866167A (en) * 2021-09-13 2021-12-31 北京逸驰科技有限公司 Tire detection result generation method, computer equipment and storage medium
CN114245055B (en) * 2021-12-08 2024-04-26 深圳位置网科技有限公司 Method and system for video call under emergency call condition
CN115174885A (en) * 2022-06-28 2022-10-11 深圳数位大数据科技有限公司 AR terminal-based offline scene information acquisition method, platform, system and medium
CN117455466B (en) * 2023-12-22 2024-03-08 南京三百云信息科技有限公司 Method and system for remote evaluation of automobile

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106062805A (en) * 2013-10-15 2016-10-26 奥达特克斯北美公司 A mobile system for generating a damaged vehicle insurance estimate
CN107194323A (en) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 Car damage identification image acquiring method, device, server and terminal device
CN107360365A (en) * 2017-06-30 2017-11-17 盯盯拍(深圳)技术股份有限公司 Image pickup method, filming apparatus, terminal and computer-readable recording medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US9723251B2 (en) * 2013-04-23 2017-08-01 Jaacob I. SLOTKY Technique for image acquisition and management
US9491355B2 (en) * 2014-08-18 2016-11-08 Audatex North America, Inc. System for capturing an image of a damaged vehicle
US11361380B2 (en) * 2016-09-21 2022-06-14 Allstate Insurance Company Enhanced image capture and analysis of damaged tangible objects
CN107358596B (en) * 2017-04-11 2020-09-18 阿里巴巴集团控股有限公司 Vehicle loss assessment method and device based on image, electronic equipment and system
CN107368776B (en) * 2017-04-28 2020-07-03 阿里巴巴集团控股有限公司 Vehicle loss assessment image acquisition method and device, server and terminal equipment
CN108665373B (en) * 2018-05-08 2020-09-18 阿里巴巴集团控股有限公司 Interactive processing method and device for vehicle loss assessment, processing equipment and client
CN108632530B (en) * 2018-05-08 2021-02-23 创新先进技术有限公司 Data processing method, device and equipment for vehicle damage assessment, client and electronic equipment


Also Published As

Publication number Publication date
CN108632530A (en) 2018-10-09
CN113179368A (en) 2021-07-27
CN108632530B (en) 2021-02-23
WO2019214319A1 (en) 2019-11-14
TW201947452A (en) 2019-12-16

Similar Documents

Publication Publication Date Title
CN113179368B (en) Vehicle loss assessment data processing method and device, processing equipment and client
CN108665373B (en) Interactive processing method and device for vehicle loss assessment, processing equipment and client
US20200364802A1 (en) Processing method, processing apparatus, user terminal and server for recognition of vehicle damage
CN110245552B (en) Interactive processing method, device, equipment and client for vehicle damage image shooting
CN113810587B (en) Image processing method and device
US9633479B2 (en) Time constrained augmented reality
TWI715932B (en) Vehicle damage identification processing method and its processing device, data processing equipment for vehicle damage assessment, damage assessment processing system, client and server
KR20210058887A (en) Image processing method and device, electronic device and storage medium
CN112200187A (en) Target detection method, device, machine readable medium and equipment
CN110136091B (en) Image processing method and related product
US9778750B2 (en) Hand-gesture-based region of interest localization
CN111340048B (en) Image processing method and device, electronic equipment and storage medium
CN110059623B (en) Method and apparatus for generating information
WO2023168957A1 (en) Pose determination method and apparatus, electronic device, storage medium, and program
CN114267041A (en) Method and device for identifying object in scene
US11823433B1 (en) Shadow removal for local feature detector and descriptor learning using a camera sensor sensitivity model
CN111553865B (en) Image restoration method and device, electronic equipment and storage medium
CN115471647A (en) Pose estimation method and device, electronic equipment and storage medium
Kapoor et al. Deep convolutional neural network in smart assistant for blinds
CN114065928A (en) Virtual data generation method and device, electronic equipment and storage medium
CN116977485A (en) Image processing method, device, equipment, medium and program product
CN116386144A (en) Training method of 3D gesture detection model and 3D gesture detection method
CN116091866A (en) Video object segmentation model training method and device, electronic equipment and storage medium
CN117011439A (en) Image reconstruction method, image reconstruction device, computer equipment, storage medium and product
CN117437429A (en) Image data processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant