CN110650292B - Method and device for assisting user in shooting vehicle video - Google Patents

Method and device for assisting user in shooting vehicle video

Info

Publication number
CN110650292B
Authority
CN
China
Prior art keywords
shooting
current frame
image
vehicle
photographing
Prior art date
Legal status
Active
Application number
CN201911046418.5A
Other languages
Chinese (zh)
Other versions
CN110650292A (en)
Inventor
郭昕
程远
王清
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN201911046418.5A
Priority to CN202110313504.9A
Publication of CN110650292A
Priority to PCT/CN2020/110735
Application granted
Publication of CN110650292B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present specification provide a method and a device for assisting a user in shooting a vehicle video. On one hand, the validity of a single frame of the shot video as an image can be detected in real time. If the single frame is a valid frame, it is further judged whether the current frame, as a frame of the vehicle inspection video, complies with the vehicle inspection video shooting rules. When the current frame is invalid or does not comply with the vehicle inspection video shooting rules, a shooting guidance strategy can be provided to the user in time. In this way, an ordinary user can correctly shoot an effective vehicle inspection video, which improves the user experience as well as the efficiency of vehicle inspection.

Description

Method and device for assisting user in shooting vehicle video
Technical Field
One or more embodiments of the present disclosure relate to the field of computer technology, and more particularly, to a method and apparatus for assisting a user in capturing a vehicle video.
Background
In a traditional car insurance claims settlement scenario, vehicle inspection is often performed by professional surveyors of the insurance business party. Because damage must be surveyed and assessed manually at the accident site, the insurance business party needs to invest heavily in labor and in training for professional knowledge. From the perspective of an ordinary user, the claims process involves waiting for a human surveyor's on-site inspection, so the waiting time is long and the experience is poor. Moreover, in the car insurance underwriting scenario, there is usually no vehicle inspection step at all. As a result, situations such as insuring an already-damaged vehicle can occur, exposing the insurance company to a greater claims risk.
Against this background, it is desirable to apply artificial intelligence and machine learning to vehicle damage detection, using computer vision image recognition technology to automatically identify the vehicle condition reflected in live images (pictures or videos) taken by ordinary users in scenarios such as insurance underwriting or claims settlement. This can greatly reduce labor costs and improve the user experience. However, while an ordinary user is taking live images, any irregularity in the shooting process or in the captured images greatly increases the underwriting risk of the insurance business party.
Disclosure of Invention
The method and apparatus for assisting a user in shooting a vehicle video described in one or more embodiments of the present disclosure may be used to solve one or more of the problems mentioned in the background section.
According to a first aspect, a method of assisting a user in capturing a video of a vehicle is provided, wherein the method comprises: acquiring a current frame in a vehicle video shot by a user and current shooting state information of a shooting terminal for shooting the vehicle video; processing the current frame by using a pre-trained image classification model so as to obtain the image quality characteristics of the current frame, and extracting shooting characteristics from the shooting state information; inputting at least the image quality characteristic into a first detection model trained in advance to detect the effectiveness of the current frame; under the condition that the current frame is detected to be a valid frame, identifying a vehicle component in the current frame by using a pre-trained component identification model so as to determine the component characteristic of the current frame based on the identification result; processing the shooting characteristics and the component characteristics by using a second detection model trained in advance to detect whether the current frame meets preset shooting rules, so as to determine a video shooting guide strategy for the current frame based on the detection result.
In one embodiment, the current frame is an image frame extracted from the vehicle video at predetermined time intervals.
In one embodiment, the photographing state information includes one or more of: the acceleration magnitude, the acceleration direction information, the placing direction information and the position information of the shooting terminal.
In one embodiment, the image quality feature comprises at least one of: an image clarity feature, a vehicle feature of the image, a light feature, and a body cleanliness feature, indicating respectively whether the image is clear, whether the image is a vehicle image, whether the light is sufficient, and whether the vehicle body is stained.
In one embodiment, inputting at least the image quality feature into a first detection model trained in advance to detect validity of the current frame comprises: after the image quality characteristic and the shooting characteristic are spliced, inputting the spliced image quality characteristic and the shooting characteristic into a first detection model trained in advance so as to detect the effectiveness of the current frame; wherein the validity of the current frame comprises one or more of: whether the image is clear, whether the image is a vehicle image, whether the light is sufficient, whether the vehicle body is stained, whether the image is shot upward or shot downward.
In one embodiment, the method further comprises: in the event that the current frame is detected not to be a valid frame, providing an image capture guidance policy for the current frame, the image capture guidance policy comprising at least one of: shooting aiming at the vehicle, shooting when the light is sufficient, shooting after cleaning stains, and keeping the shooting terminal along the vertical direction.
In one embodiment, the photographing rule includes an image composition rule indicating that a predetermined component falls in a predetermined region in an image, and the video photographing guide policy includes adjusting a distance between the photographing terminal and a vehicle body.
In one embodiment, the photographing rule includes a moving direction rule for detecting whether a moving direction of the photographing terminal is in a predetermined direction, and the video photographing guide policy includes moving in a direction opposite to a current moving direction or returning to an origin photographing.
In one embodiment, the photographing rule includes a photographing angle rule for detecting whether the current frame crosses a predetermined photographing angle.
In one embodiment, the second detection model is a long-short term memory model.
According to a second aspect, there is provided an apparatus for assisting a user in capturing a video of a vehicle, wherein the apparatus comprises:
an acquisition unit configured to acquire a current frame in a vehicle video shot by a user and current shooting state information of a shooting terminal used to shoot the vehicle video;
a first feature extraction unit configured to process the current frame using a pre-trained image classification model, thereby obtaining an image quality feature of the current frame, and extracting a photographing feature from the photographing state information;
the validity detection unit is configured to input at least the image quality characteristic into a first detection model trained in advance so as to detect the validity of the current frame;
a second feature extraction unit configured to, in a case where it is detected that the current frame is a valid frame, recognize a vehicle component in the current frame using a component recognition model trained in advance to determine a component feature of the current frame based on a recognition result;
and the video shooting guide unit is configured to process the shooting characteristics and the part characteristics by using a second detection model trained in advance to detect whether the current frame meets a preset shooting rule, so that a video shooting guide strategy for the current frame is determined based on the detection result.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect described above.
According to a fourth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory has stored therein executable code, and wherein the processor, when executing the executable code, implements the method of the first aspect.
The embodiment of the specification provides a method and a device for assisting a user in shooting a vehicle video, which make full use of the characteristics of an image and shooting state information of a shooting terminal, on one hand, the effectiveness of the image is detected so as to prevent the vehicle video shot by the user from containing invalid frames to influence a vehicle checking effect, and on the other hand, whether each frame meets a shooting rule is detected, so that the condition that the shot vehicle video cannot comprehensively show the vehicle state is avoided. Thus, the effectiveness of the vehicle video shot by the user can be improved by providing shooting guide for the user in two aspects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a schematic diagram illustrating an implementation scenario of an embodiment of the present description;
FIG. 2 illustrates a flow diagram that assists a user in capturing a vehicle video according to one embodiment;
FIG. 3 is a diagram illustrating a specific example of determining validity of a current frame;
fig. 4 is a diagram showing a shooting direction, a shooting angle, and the like in a shooting rule of a specific example;
fig. 5 shows a diagram for providing a video capture guidance strategy according to a specific example;
FIG. 6 shows a schematic block diagram of an apparatus to assist a user in capturing vehicle video, according to one embodiment.
Detailed Description
The scheme provided by the specification is described below with reference to the accompanying drawings.
First, an implementation scenario of an embodiment of the present specification is described with reference to fig. 1. Fig. 1 shows a vehicle inspection scenario, which may be any scenario in which the damage condition of a vehicle needs to be checked, for example when the vehicle is being insured or when a car insurance claim is being settled.
In this implementation scenario, a user may capture a live video of the vehicle through a mobile terminal capable of capturing video (hereinafter referred to as a shooting terminal), such as a smartphone or a camera. While the user collects the live video of the vehicle, the processing platform can judge the validity of the currently collected image frame in real time and provide auxiliary guidance for the user's shooting. The processing platform may be integrated in the mobile terminal that the user uses to acquire the video, or may be a server that remotely provides services for the vehicle inspection application of the mobile terminal; this is not limited here.
Specifically, the processing platform may obtain the current frame shot by the user and the current state information of the shooting terminal. It can be understood that shooting features, such as shooting angle and movement acceleration, can be extracted from the current state information. On one hand, the quality of the current frame can be judged by a pre-trained first detection model, for example judging sharpness, illumination, and so on. When the quality of the current frame is judged invalid, the user can be prompted to adjust the shooting. On the other hand, when the current frame is judged to be a valid frame, the component features of the current frame can be further extracted, whether the current frame meets scene-related shooting rules is detected by combining the shooting state features and the component features, and a video shooting guidance strategy is determined for the user based on the detection result. The scene-related predetermined conditions include, for example, whether a predetermined component falls in a predetermined area of the image composition, and whether the direction of change of the shooting angle is a predetermined direction.
The specific process of assisting the user in capturing the vehicle video is described in detail below.
FIG. 2 illustrates a flow diagram of a method of assisting a user in capturing a video of a vehicle, according to one embodiment. The execution subject of the method can be any system, device, apparatus, platform or server with computing and processing capabilities. Such as the processing platform shown in fig. 1.
As shown in fig. 2, the method for assisting a user in shooting a vehicle video includes the following steps: step 201, acquiring a current frame in the vehicle video shot by the user and the current shooting state information of the shooting terminal used to shoot the vehicle video; step 202, processing the current frame using a pre-trained image classification model to obtain the image quality features of the current frame, and extracting shooting features from the shooting state information; step 203, inputting at least the image quality features into a pre-trained first detection model to detect the validity of the current frame; step 204, in a case where the current frame is detected to be a valid frame, identifying the vehicle components in the current frame using a pre-trained component recognition model, so as to determine the component features of the current frame based on the recognition result; and step 205, processing the shooting features and the component features using a pre-trained second detection model to detect whether the current frame meets preset shooting rules, so as to determine a video shooting guidance strategy for the current frame based on the detection result.
First, in step 201, a current frame in a vehicle video captured by a user and current capturing state information of a capturing terminal for capturing the vehicle video are acquired.
When the user shoots the vehicle video, the device captures at a certain frame rate according to its capability, for example 60 frames/second, i.e. 60 frames are shot every second. Since the flow of this embodiment runs synchronously with the shooting process, the real-time requirement on frame processing is high. However, adjacent frames may contain large overlapping regions, so the flow does not need to process every frame of the captured video. In step 201, the acquired current frame may be the latest frame currently shot, or a frame that, given the processing performance of the device, is substantially synchronized with the frame currently being captured. For example, if the device shoots at 60 frames/second and its processing performance allows 15 frames per second to be processed, then one frame (for example the last frame of every 4 frames) can be extracted at that predetermined frame interval, and the flow of this embodiment is executed for it. In this case, when the 9th frame is being captured, the acquired current frame may be the 8th frame, which basically satisfies the real-time requirement. If the processing performance of the device is good enough that processing every frame still meets the real-time requirement, each frame may be acquired in turn as the current frame for the flow of this embodiment.
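For illustration, the sampling described above can be sketched as follows. This is a minimal example, not part of the original disclosure; it assumes OpenCV as the capture source, and the names (sample_frames, FRAME_INTERVAL) are illustrative assumptions.

```python
import cv2

FRAME_INTERVAL = 4  # process one frame out of every 4 (e.g. 60 fps -> 15 fps)

def sample_frames(video_source=0):
    """Yield the last frame of every FRAME_INTERVAL-frame group as the current frame."""
    cap = cv2.VideoCapture(video_source)
    index = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        index += 1
        if index % FRAME_INTERVAL == 0:
            yield index, frame  # this frame becomes the "current frame"
    cap.release()
```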
On the other hand, the shooting status of the shooting terminal for shooting the vehicle video is also important information for assisting the user in shooting the vehicle video, and therefore, the current shooting status information of the shooting terminal can also be acquired in step 201. The photographing terminal may be, for example, a terminal such as a smartphone or a camera. The current photographing state information may be used to describe a state in which the photographing terminal is currently photographing, for example, an acceleration magnitude, an acceleration direction, a placement direction, position information, and the like. The acceleration magnitude and the acceleration direction can be obtained through an acceleration sensor arranged on the shooting terminal, the placing direction can be obtained through a gyroscope, and the position information can be obtained through a positioning module (such as a Beidou satellite, a GPS and the like) realized through software or hardware.
It should be noted that the placement direction may include vertical, horizontal (placed sideways), tilted at an angle (pitched up or down), and so on. The placement direction is related to the picture orientation of the shot image frame; for example, when the shooting terminal is a smartphone, since the car body is long, the smartphone may be held horizontally to achieve a better picture.
The location information may be absolute location information, such as location information represented by latitude and longitude. If the absolute position error is large, the relative position of the shooting terminal can be determined based on data acquired by a gyroscope and an acceleration sensor. The relative position may be a position relative to a reference position point. The reference position point is, for example, a shooting start point, i.e., a position point at which the shooting apparatus was located when the first frame of the vehicle video was shot.
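A hypothetical container for the shooting state information described above might look as follows; the field names and units are assumptions for illustration only, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ShootingState:
    acceleration: float                            # acceleration magnitude, from the accelerometer
    acceleration_dir: Tuple[float, float, float]   # unit vector of the acceleration direction
    orientation_deg: float                         # placement direction from the gyroscope, in degrees
    position: Tuple[float, float]                  # position relative to the shooting start point
```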
It can be understood that the shooting state of the shooting terminal generally changes less frequently than images are captured, so the shooting state information does not have to be collected for every captured frame. Instead, the shooting state information may be collected at certain acquisition intervals, for example once every 0.5 seconds.
Next, in step 202, the current frame is processed by using the pre-trained image classification model, so as to obtain the image quality feature of the current frame, and the shooting feature is extracted from the shooting status information. Wherein the image quality feature is a feature related to image quality. Here, the image quality feature may include, for example, but is not limited to, at least one of the following: image clarity characteristics, vehicle characteristics of the image, light characteristics, body cleanliness characteristics in the image, and the like.
The image classification model may be a multi-task classification model, for example implemented with an algorithm such as MobileNet V2 or ShuffleNet. The image classification model may be trained on multiple pictures labeled with quality categories. A picture may carry multiple labels, for example "clear picture", "vehicle picture", "no stains", and "sufficient light". During training, the pictures serving as training samples are input into the selected model in turn, and the model parameters are adjusted based on the comparison between the model's outputs and the corresponding labels. The labels may also be represented numerically, for example 1 for a clear picture and 0 for a blurred picture.
The image classification model may include a plurality of output channels. The current frame is input into the trained image classification model, and the model may output, through each channel, either the probability of the current frame for each feature category or a specific classification result. As an example, after the current frame is input into the image classification model, the outputs of the model on 4 channels are 0.1, 0.8, 0.82 and 0.9 respectively, indicating that the probability of the picture being blurred is 0.1 (image clarity feature), the probability of it being a vehicle picture is 0.8 (vehicle feature of the image), the probability of the vehicle body being stained is 0.82 (body cleanliness feature in the image), and the probability of the light being sufficient is 0.9 (light feature). If a cutoff probability is determined in advance, the outputs of the image classification model on the 4 channels can instead be 0, 1, 1, 1, the four elements corresponding respectively to the image clarity feature, the vehicle feature of the image, the body cleanliness feature, and the light feature, indicating that the image is clear, the image is a vehicle image, the vehicle body is stained, and the light is sufficient.
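As an illustrative sketch only (the disclosure provides no code), such a multi-task classifier could be assembled from a MobileNet V2 backbone with four sigmoid output channels; the layer sizes and the 0.7 cutoff below are assumptions consistent with the examples above.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class QualityClassifier(nn.Module):
    """Multi-task quality model: clarity, vehicle, body cleanliness, light."""
    def __init__(self):
        super().__init__()
        self.backbone = mobilenet_v2(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(1280, 4)   # one output channel per quality item

    def forward(self, x):
        f = self.pool(self.backbone(x)).flatten(1)
        return torch.sigmoid(self.head(f))  # per-channel probabilities

model = QualityClassifier().eval()
with torch.no_grad():
    probs = model(torch.randn(1, 3, 224, 224))  # e.g. [0.1, 0.8, 0.82, 0.9]
labels = (probs > 0.7).int()                    # binarize with a cutoff probability
```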
In this way, features related to image quality can be obtained. When algorithms such as MobileNet V2 and ShuffleNet are used, the traditional two-dimensional convolution operation is optimized, which reduces model parameters, speeds up computation, and facilitates deployment on a mobile terminal; this is not described in detail here.
On the other hand, the shooting features may be extracted from the shooting state information. As the name implies, a shooting feature is a feature related to the shooting state of the shooting terminal. For example, from the placement direction of the terminal, features such as whether the terminal is shooting upward or downward can be extracted. If the angle between the shooting direction of the shooting terminal and the vertical upward direction is larger than a preset threshold (such as 25 degrees), the shooting feature is determined to be upward shooting, and so on. For another example, the movement direction of the shooting terminal may be determined from its acceleration direction and speed.
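For illustration, deriving shooting features from the hypothetical ShootingState container sketched earlier might look like this; the 25-degree threshold follows the example above, and all names are assumptions.

```python
UPSHOT_THRESHOLD_DEG = 25.0  # example threshold from the text

def extract_shooting_features(state):
    # Angle between the shooting direction and the vertical upward direction.
    angle = state.orientation_deg
    is_upshot = angle > UPSHOT_THRESHOLD_DEG   # upward-shooting flag
    return [state.acceleration, angle, float(is_upshot)]
```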
Further, in step 203, at least the image quality features are input into a first detection model trained in advance to detect the validity of the current frame. The validity detection of the current frame is, for example: whether the image is clear, whether the image is a vehicle image, whether the light is sufficient, whether the vehicle body is stained, and the like. It is understood that the current frame is valid, indicating that the current frame is a frame that can be used to determine the vehicle state, such as a clear image, a picture of a vehicle, etc. The first detection model may be seen as a classification model, which may for example be implemented as a model such as a GBDT (gradient boosting decision tree).
In some alternative implementations, the output of the first detection model may correspond to a valid category and an invalid category. The output corresponds to the valid category when at least the predetermined feature items point toward the frame being usable for judging the vehicle state, for example when the clarity feature of the image corresponds to the clear category, or the probability of the picture being clear exceeds a predetermined value (e.g. 0.7), and so on. In general, the current frame may be determined not to be a valid frame (it may also be called an invalid frame) in at least one of the following situations: insufficient light, an unclear image, no vehicle in the picture, a stained vehicle body, and the like.
In other alternative implementations, the detection result of the first detection model may correspond to a plurality of categories, which can distinguish the specific situations of the current frame, such as insufficient light or an unclear image when the current frame is not a valid frame. The first detection model may map the various situations to the outputs of a plurality of channels, or may express the current situation of the current frame through the output of a single channel. When the first detection model expresses the situation of the current frame through the output of a single channel, each image quality feature may also have a priority, reflected in the weight of the respective feature in the model; the higher the weight, the higher the priority. For example, the priority order may be: whether the light is sufficient, whether the image is clear, whether the image is a vehicle picture, whether the vehicle body is stained, and so on. This priority order also indicates the relative importance of the various image quality factors. For example, only when sufficient light is detected and the image is clear does it make sense to judge whether the image is a vehicle picture. The output of the detection model's output channel then corresponds to the highest-priority invalid situation, or to the valid-frame situation.
In one embodiment, if the current frame is not a valid frame, the output of the first detection model may also correspond to an image shooting guidance strategy provided to the user for the actual situation. The image shooting guidance strategy is used to guide the user's image shooting behavior, and may include "please aim at the vehicle" (corresponding to a non-vehicle image), "please shoot in a bright environment" (corresponding to insufficient light), "please wipe off the body stains" (corresponding to a stained vehicle body), and so on. For example, if the vehicle feature of the image indicates a non-vehicle picture, the output of the first detection model may correspond to the image shooting guidance strategy "please aim at the vehicle".
In this case, the training samples of the first detection model may correspond to a plurality of pictures, each picture providing the image quality features determined by the image classification model together with a pre-labeled category label, such as "please aim at the vehicle" or "please shoot in a bright environment". The image quality features of each picture are input into the selected classification model in turn, and the model parameters are adjusted according to the comparison between the model output and the category label, thereby training the first detection model; this is not repeated here.
According to one possible design, the image quality features and the shooting features can be spliced together and input into the first detection model, so that the validity of the current frame is detected in combination with the shooting features. It can be understood that some objects may appear deformed in image frames shot at particular angles; for example, when shooting from a low angle, objects are stretched upward, and an object that is actually circular may become elliptical, which affects the accuracy of subsequent operations such as component recognition and damage recognition. Therefore, the shooting state of the shooting terminal can also be a factor in whether the current frame can be used to judge the vehicle state. For example, in one embodiment, if the shooting features of the shooting terminal include upward shooting or downward shooting, it may be determined that the current frame is not a valid frame. At this time, according to the different embodiments described above, the output of the first detection model may correspond to the invalid category, to an upward/downward-shooting invalid category, or to a corresponding image shooting guidance strategy (e.g. "please hold the terminal vertically"), and so on.
The image shooting guidance strategy can be presented to the user in the form of voice, text, or the like, so that if the current frame is not a valid frame, image shooting guidance can be given to the user in time to help collect valid frames.
As can be seen from the above description, steps 202 and 203 detect the validity of the current frame. To describe this process more clearly, refer to fig. 3, which illustrates a specific example of judging the validity of the current frame. In fig. 3, after the shooting terminal starts shooting the vehicle video, the shooting state information of the terminal may be collected at certain intervals (e.g. every 0.5 seconds) and used to update the current shooting state information. The current shooting state information may be obtained at the same time the current frame is acquired. Then, on the one hand, the multi-task classification model detects the image quality features of the current frame, represented by a feature vector, for example (1, 0, 0, 1), where the value of each element corresponds to the detection result of the corresponding quality item. On the other hand, shooting features such as an acceleration feature and a gyroscope angle feature, e.g. (0.1, 30°), are extracted from the current shooting state information. The image quality features and shooting features are then spliced into (1, 0, 0, 1, 0.1, 30°) and input into a pre-trained GBDT model (the first detection model). According to the output of the GBDT model, the current frame is determined to be a valid frame, or a corresponding prompt is given for the invalid situation, such as "please shoot in a bright environment" at night (insufficient light) or "please hold the terminal vertically" when shooting upward or downward, and so on.
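A runnable sketch of this validity check, under the assumption that the first detection model is a GBDT classifier as in the example, could look as follows; the training data and prompt strings are placeholders, not the disclosure's dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

PROMPTS = {0: "valid frame",
           1: "please shoot in a bright environment",
           2: "please hold the terminal vertically"}

rng = np.random.default_rng(0)
X_train = rng.random((300, 6))           # placeholder spliced quality + shooting features
y_train = rng.integers(0, 3, 300)        # placeholder validity labels

gbdt = GradientBoostingClassifier().fit(X_train, y_train)

current = np.array([[1, 0, 0, 1, 0.1, 30.0]])   # e.g. (1, 0, 0, 1, 0.1, 30°)
print(PROMPTS[int(gbdt.predict(current)[0])])
```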
In the case that the output result of the first detection model corresponds to the current frame being a valid frame, i.e., a frame that can be used to determine the vehicle state, the vehicle component in the current frame is identified using the pre-trained component identification model, via step 204, to determine the component feature of the current frame based on the identification result. Wherein the component recognition model may be used to recognize the vehicle component in the image.
The part recognition model can be trained by using a plurality of pictures as training samples. The multiple pictures are respectively marked with part outlines and/or part names. When training, the original picture can be input into the selected model, and the model parameters are adjusted according to the labeling result, so that the part recognition model is trained. The component recognition model is implemented by, for example, a Convolutional Neural Network (CNN) or the like.
As described in step 203, when the current frame is a valid frame, it typically has properties such as a clear image, sufficient light, and vertical shooting. The current frame is input into the component recognition model, whose output may be a picture annotated with component outlines, or a textual description of component features, for example one indicating the name and positional relationship of each component appearing in the current frame. The output of the component recognition model can serve as the component features of the currently shot vehicle.
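For illustration only, an off-the-shelf detector can stand in for the component recognition model; the number of part classes and the untrained weights here are assumptions of the sketch, not the disclosure's model.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# 10 part classes is an arbitrary placeholder for the set of vehicle components.
detector = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=10)
detector.eval()

with torch.no_grad():
    preds = detector([torch.rand(3, 480, 640)])[0]  # one valid current frame

# preds["boxes"], preds["labels"] and preds["scores"] give component outlines,
# identities and confidences, usable as the component features of the frame.
```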
Then, via step 205, the shooting features and component features are processed using the pre-trained second detection model to detect whether the current frame satisfies the preset shooting rules, so that a video shooting guidance strategy for the current frame is determined based on the detection result. A shooting rule may be a rule for ensuring that the frames can completely present the vehicle state; different shooting rules can be used according to actual requirements.
In one embodiment, the shooting rules may include an image composition rule. The image composition rule may require that predetermined components of the vehicle fall in predetermined regions of the image composition. For example, a predetermined region is indicated by a frame of a predetermined color, and the composition rule is satisfied for the current frame if the predetermined component falls inside that frame. As an example, a green frame may be drawn at the top and bottom of the image; in the vehicle contour of each frame during shooting, the chassis contour needs to fall into the bottom frame and the roof contour needs to fall into the top frame, otherwise the current frame does not satisfy the image composition rule. The chassis contour and roof contour may be determined based on the component features. The image composition rule helps keep the distance between the user and the vehicle body steady, avoiding situations where the user suddenly moves closer or farther during shooting and the vehicle appears alternately small and large in the video frames.
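An illustrative composition check following the green-frame example above; the guide-box coordinates are assumptions, with boxes given as (x0, y0, x1, y1) in pixels.

```python
def box_contains(outer, inner):
    """True if the inner box lies completely inside the outer box."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1] and
            inner[2] <= outer[2] and inner[3] <= outer[3])

def satisfies_composition(roof_box, chassis_box,
                          top_guide=(0, 0, 1280, 160),
                          bottom_guide=(0, 560, 1280, 720)):
    # The roof contour must fall in the top frame, the chassis in the bottom one.
    return box_contains(top_guide, roof_box) and box_contains(bottom_guide, chassis_box)
```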
In another embodiment, the shooting rules may further include a shooting angle rule. When images are collected at each of a set of predetermined angles, the vehicle state can be presented completely from all orientations. For example, the full circle around the vehicle may be divided into 12 angles; when the user has collected images at all 12 angles, the user can be considered to have captured a complete vehicle video. The shooting angle can be determined from the shooting features, for example from the offset, distance, and angle of the current shooting point relative to a reference point. Optionally, the reference point may be the starting shooting position.
For ease of description, refer to the example shown in fig. 4. In fig. 4, suppose the initial view angle is the position directly facing the front of the vehicle, from which the front license plate can be captured. When shooting starts, whether the current frame is valid, and whether it satisfies the shooting rules (for example, whether the license plate appears in the middle of the image in the left-right direction), can be judged according to the shooting features (such as the current position point and the image composition) and the component features of the current frame; if so, the current frame is an available initial frame. After the initial frame is determined, a coordinate system can be established taking the shooting position of the initial frame as the origin and the longitudinal and transverse directions of the vehicle body as the coordinate axes.
In an alternative implementation, the second detection model may include a method for computing the relative angle. As the shooting terminal moves, the magnitude and direction of the acceleration collected by the built-in acceleration sensor, gyroscope, and the like can be used to determine the angle of the terminal relative to the starting point, or the angle by which the current view has rotated about a central axis (such as the central axis of the vehicle) relative to the starting view. The difference between this angle and a predetermined angle is computed, and when the difference is within a predetermined range, the frame can be regarded as shot at that predetermined angle. When every predetermined angle around the vehicle body is detected, the shooting satisfies the shooting angle rule. Conversely, if no frame satisfying a certain predetermined angle is detected within its tolerance range, and the movement direction of the shooting terminal shows that the current frame has skipped past that predetermined angle, then the current frame does not satisfy the shooting angle rule.
Through the shooting angle rule, the completeness of the shooting angle of the user can be controlled, so that a comprehensive vehicle video can be acquired.
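A sketch of the 12-angle coverage bookkeeping implied above; the tolerance value is an assumption.

```python
ANGLE_STEP = 360 / 12   # 12 predetermined angles around the vehicle
TOLERANCE = 10.0        # degrees; assumed tolerance around each predetermined angle

covered = set()

def update_coverage(relative_angle_deg):
    """relative_angle_deg: current view angle relative to the starting view."""
    sector = round(relative_angle_deg / ANGLE_STEP) % 12
    center = sector * ANGLE_STEP
    diff = abs((relative_angle_deg - center + 180) % 360 - 180)
    if diff <= TOLERANCE:
        covered.add(sector)
    return len(covered) == 12   # True once every predetermined angle is covered
```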
In yet another embodiment, the shooting rules may further include a movement direction rule. It can be understood that, during video shooting, moving consistently in one direction is better both for shooting and for judging the vehicle state, as shown by the black wireframe around the vehicle in fig. 4, where the arrows indicate the movement direction of the shooting terminal. Optionally, the second detection model may include an algorithm for determining the movement direction, which compares the shooting features of the current frame with those of the previous frame to determine the current movement direction of the shooting terminal. As an example, the direction of change of the current frame's shooting position relative to the previous frame's shooting position is the movement direction of the shooting terminal.
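For illustration, the movement-direction comparison can be sketched as follows, assuming relative positions as in the coordinate system of fig. 4; the cosine test is one possible realization, not the disclosure's specific algorithm.

```python
import numpy as np

def moving_direction_ok(prev_pos, cur_pos, expected_dir):
    """True if the shooting terminal moves (roughly) along the expected direction."""
    delta = np.asarray(cur_pos, float) - np.asarray(prev_pos, float)
    if np.linalg.norm(delta) < 1e-6:
        return True   # not moving; nothing to correct
    cos_sim = delta @ np.asarray(expected_dir, float) / (
        np.linalg.norm(delta) * np.linalg.norm(expected_dir))
    return cos_sim > 0   # positive projection: moving the right way
```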
In more embodiments, there may be more shooting rules, which are not described herein.
As is clear from the above description, a shooting rule is generally related not only to the current frame but also to the state of previous frames. Therefore, besides the specific computation methods above, the second detection model may be implemented with machine learning methods, for example a shallow DNN model or an LSTM model. Taking LSTM as an example, it is suited to sequence problems and can memorize and integrate information over long time spans. For LSTM model training, a plurality of videos can be used as training samples. Frames are extracted from each video at predetermined time intervals, and each frame is associated with its image features, shooting features, and a label describing whether it satisfies the shooting rules. Such labels are, for example: satisfies the shooting rules, moving in the opposite direction, skipped a predetermined shooting angle, and so on. For each training sample, the image features and shooting features of each frame are input into the LSTM model in time order, and the model parameters are adjusted according to the label of the current frame, thereby training the LSTM model. Optionally, the second detection model may also consist of multiple modules, each detecting one of the shooting rules, for example one module for the shooting angle rule and another for the image composition rule.
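A minimal PyTorch sketch of such an LSTM-based second detection model; the feature dimension, hidden size, and label set are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ShootingRuleLSTM(nn.Module):
    """Classify each frame of a (shooting + component) feature sequence."""
    def __init__(self, feat_dim=32, hidden=64, num_labels=3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        # e.g. labels: satisfies rules / opposite direction / skipped angle
        self.cls = nn.Linear(hidden, num_labels)

    def forward(self, seq):          # seq: (batch, time, feat_dim)
        out, _ = self.lstm(seq)      # earlier frames are processed first
        return self.cls(out)         # one label score vector per frame

model = ShootingRuleLSTM()
scores = model(torch.randn(1, 20, 32))   # 20 frames of spliced features
```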
The output of the second detection model can correspond to whether the shooting rules are satisfied, and can also correspond to the corresponding video shooting guidance strategy. The video shooting guidance strategy may be, for example: when the movement direction is opposite, prompting the user to walk in the other direction; when a predetermined shooting angle has been skipped, prompting the user to return to the predetermined angle range to shoot; and when the image composition rule is not satisfied, prompting the user to adjust the distance between the shooting terminal and the vehicle body, and so on. Alternatively, when the movement direction is wrong, the video shooting guidance strategy may also be to return to the origin and shoot again.
The video shooting guidance strategy can be presented to the user in various forms, such as voice, text, or graphics. For example, if the current frame does not satisfy the preset image composition rule, the frame of predetermined color may change color (e.g. from green to red), and the problem may simultaneously be signaled by vibrating the mobile terminal, playing a prompt sound, and so on. When the current frame does not satisfy the preset movement direction rule, the correct movement direction may also be indicated with a predetermined figure such as an arrow.
For a more intuitive description of steps 204 and 205, refer to fig. 5, which illustrates a specific example of detecting, through the second detection model, whether the current frame satisfies the preset shooting rules. In the example shown in fig. 5, when the current frame is detected to be a valid frame, component recognition is performed on it to obtain the component features. The component features and shooting features are then spliced, and two LSTM modules in the second detection model detect, respectively, the movement direction of the shooting terminal and the image composition of the current frame. When the movement direction is inconsistent with the prompted direction (opposite to the arrow direction shown in fig. 4), the user may be prompted to return to the origin in front of the vehicle to reshoot. On the other hand, when the composition is unreasonable, for example the roof of the vehicle does not fall into the designated green frame, the user may be prompted to adjust the shooting by changing the frame color to red, vibrating the shooting terminal, voice prompts, and so on.
In this way, if the current frame is invalid, this can be determined in time in step 203 and image shooting guidance provided to the user; if the current frame is valid, video shooting guidance is provided to the user in step 205, thus avoiding the subsequent problems caused by non-standard shooting operations. It can be understood that if the user's operation is standard, no video shooting guidance strategy is needed in step 205 and shooting simply continues until it is completed. Alternatively, a shooting guidance strategy may always be presented on the shooting terminal to guide correct shooting, regardless of whether the user's shooting is standard.
Looking back at the above process, the method provided by the embodiments of the present specification for assisting a user in shooting a vehicle video can, on one hand, detect in real time the validity of a single frame of the shot video as an image. If the single frame is a valid frame, it further judges whether the current frame, as a frame of the vehicle inspection video, complies with the vehicle inspection video shooting rules. When the current frame is invalid or does not comply with the vehicle inspection video shooting rules, a video shooting guidance strategy can be provided to the user in time. In this way, an ordinary user can correctly shoot an effective vehicle inspection video, which improves the user experience as well as the efficiency of vehicle inspection during claims settlement. When the embodiments of the present specification are used for vehicle inspection during insurance underwriting, the claims risk of the insurance business party can also be reduced.
According to an embodiment of another aspect, an apparatus for assisting a user in shooting a vehicle video is also provided. FIG. 6 shows a schematic block diagram of an apparatus for assisting a user in shooting a vehicle video according to one embodiment. The apparatus can be deployed on any device, platform, terminal, or server with sufficient computing power, such as the smartphone shown in fig. 1. As shown in fig. 6, the apparatus 600 for assisting a user in shooting a vehicle video includes: an acquisition unit 61 configured to acquire a current frame in the vehicle video shot by the user and the current shooting state information of the shooting terminal used to shoot the vehicle video; a first feature extraction unit 62 configured to process the current frame using a pre-trained image classification model to obtain the image quality features of the current frame, and to extract shooting features from the shooting state information; a validity detection unit 63 configured to input at least the image quality features into a pre-trained first detection model to detect the validity of the current frame; a second feature extraction unit 64 configured to, when the current frame is detected to be a valid frame, recognize the vehicle components in the current frame using a pre-trained component recognition model, so as to determine the component features of the current frame based on the recognition result; and a video shooting guidance unit 65 configured to process the shooting features and component features using a pre-trained second detection model to detect whether the current frame satisfies the preset shooting rules, so as to determine a video shooting guidance strategy for the current frame based on the detection result.
According to one embodiment, the current frame is an image frame extracted at predetermined time intervals from the vehicle video.
According to one embodiment, the shooting state information includes one or more of: acceleration magnitude and acceleration direction information, placement direction information, and position information of the shooting terminal, where the acceleration magnitude and direction may be collected by an acceleration sensor built into the shooting terminal, the placement direction by a gyroscope, and the position information by a positioning module.
According to one embodiment, the image quality features include at least one of: an image clarity feature, a vehicle feature of the image, a light feature, and a body cleanliness feature in the image.
According to one embodiment, the validity detection unit 63 is further configured to:
splicing the image quality characteristic and the shooting characteristic, and inputting the spliced image quality characteristic and the shooting characteristic into a pre-trained first detection model to detect the effectiveness of the current frame;
wherein the validity of the current frame comprises one or more of: whether the image is clear, whether the image is a vehicle image, whether the light is sufficient, whether the vehicle body is stained, whether the image is shot upward or shot downward.
According to one embodiment, the apparatus 600 further comprises:
an image capturing guidance unit (not shown) configured to provide an image capturing guidance policy for the current frame in a case where it is detected that the current frame is not a valid frame, the image capturing guidance policy including one of: shooting aiming at the vehicle, shooting when the light is sufficient, shooting after cleaning stains, and keeping the shooting terminal along the vertical direction.
According to one embodiment, the photographing rule includes an image composition rule indicating that the predetermined component falls within a predetermined area in the image, and the video photographing guide policy includes adjusting a distance of the photographing terminal from the vehicle body.
According to one embodiment, the photographing rule includes a moving direction rule for detecting whether a moving direction of the photographing terminal is in a predetermined direction, and the video photographing guide policy includes moving in a direction opposite to a current moving direction or returning to an origin photographing.
According to one embodiment, the photographing rule includes a photographing angle rule for detecting whether the current frame crosses a predetermined photographing angle.
It should be noted that, the above apparatus 600 for assisting the user in shooting the vehicle video shown in fig. 6 corresponds to the method embodiment shown in fig. 2, and the corresponding description in the method embodiment corresponding to fig. 2 is also applicable to the apparatus for assisting the user in shooting the vehicle video shown in fig. 6, and is not repeated herein.
According to an embodiment of another aspect, a computer-readable storage medium is also provided, on which a computer program is stored which, when executed in a computer, causes the computer to carry out the respectively described method.
According to an embodiment of yet another aspect, there is also provided a computing device comprising a memory and a processor, the memory having stored therein executable code, the processor implementing the correspondingly described method when executing the executable code.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of this specification may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The above-mentioned embodiments are intended to explain the technical idea, technical solutions and advantages of the present specification in further detail, and it should be understood that the above-mentioned embodiments are merely specific embodiments of the technical idea of the present specification, and do not limit the scope of the technical idea of the present specification, and any modification, equivalent replacement, improvement, etc. made on the basis of the technical solution of the technical idea of the present specification should be included in the scope of the technical idea of the present specification.

Claims (20)

1. A method of assisting a user in capturing a video of a vehicle, wherein the method comprises:
acquiring a current frame in a vehicle video shot by a user and current shooting state information of a shooting terminal for shooting the vehicle video;
processing the current frame by utilizing a pre-trained image classification model so as to obtain the image quality characteristics of the current frame, and extracting shooting characteristics from the shooting state information;
inputting at least the image quality characteristic into a first detection model trained in advance to detect the effectiveness of the current frame;
under the condition that the current frame is detected to be a valid frame, identifying a vehicle component in the current frame by using a pre-trained component identification model so as to determine the component characteristic of the current frame based on the identification result;
processing the shooting characteristics and the part characteristics by using a pre-trained second detection model to detect whether the current frame meets a preset shooting rule or not so as to determine a video shooting guiding strategy for the current frame based on a detection result, wherein the second detection model is a long-short term memory model, and the second detection model sequentially processes the shooting characteristics and the part characteristics of other frames arranged in time sequence before processing the shooting characteristics and the part characteristics of the current frame.
2. The method of claim 1, wherein the current frame is an image frame extracted at predetermined time intervals from the vehicle video.
3. The method of claim 1, wherein the photography state information comprises one or more of: the acceleration magnitude, the acceleration direction information, the placing direction information and the position information of the shooting terminal.
4. The method of claim 1, wherein the image quality characteristic comprises at least one of: image clarity characteristics, vehicle characteristics of the image, light characteristics, body cleanliness characteristics in the image.
5. The method of claim 1, wherein inputting at least the image quality feature into a first detection model trained in advance to detect validity of the current frame comprises:
after the image quality characteristic and the shooting characteristic are spliced, inputting the spliced image quality characteristic and the shooting characteristic into a first detection model trained in advance so as to detect the effectiveness of the current frame;
wherein the validity of the current frame comprises one or more of: whether the image is clear, whether the image is a vehicle image, whether the light is sufficient, whether the vehicle body is stained, whether the image is shot upward or shot downward.
6. The method of claim 1, wherein the method further comprises:
in the event that the current frame is detected not to be a valid frame, providing an image capture guidance policy for the current frame, the image capture guidance policy comprising one of: shooting aiming at the vehicle, shooting when the light is sufficient, shooting after cleaning stains, and keeping the shooting terminal along the vertical direction.
7. The method of claim 1, wherein the shooting rule comprises an image composition rule indicating that predetermined components fall within a predetermined area of the image, and the video shooting guidance strategy comprises adjusting the distance between the shooting terminal and the vehicle body.
8. The method of claim 1, wherein the shooting rule comprises a movement direction rule for detecting whether the shooting terminal is moving in a predetermined direction, and the video shooting guidance strategy comprises moving in the direction opposite to the current movement direction or returning to the starting point to resume shooting.
9. The method of claim 1, wherein the shooting rule comprises a shooting angle rule for detecting whether the current frame crosses a predetermined shooting angle (illustrative checks for the rules of claims 7-9 follow the claims).
10. An apparatus for assisting a user in shooting a vehicle video, wherein the apparatus comprises:
an acquisition unit configured to acquire a current frame of a vehicle video being shot by a user, and current shooting state information of the shooting terminal used to shoot the vehicle video;
a first feature extraction unit configured to process the current frame with a pre-trained image classification model to obtain image quality features of the current frame, and to extract shooting features from the shooting state information;
a validity detection unit configured to input at least the image quality features into a pre-trained first detection model to detect the validity of the current frame;
a second feature extraction unit configured to, when the current frame is detected to be a valid frame, identify vehicle components in the current frame with a pre-trained component recognition model, so as to determine component features of the current frame based on the recognition result;
a video shooting guidance unit configured to process the shooting features and the component features with a pre-trained second detection model to detect whether the current frame complies with a predetermined shooting rule, so as to determine a video shooting guidance strategy for the current frame based on the detection result, wherein the second detection model is a long short-term memory (LSTM) model which, before processing the shooting features and component features of the current frame, sequentially processes the shooting features and component features of the preceding frames in chronological order.
11. The apparatus of claim 10, wherein the current frame is an image frame extracted from the vehicle video at predetermined time intervals.
12. The apparatus of claim 10, wherein the shooting state information comprises one or more of: the acceleration magnitude, acceleration direction, orientation, and position of the shooting terminal.
13. The apparatus of claim 10, wherein the image quality features comprise at least one of: an image sharpness feature, a vehicle-presence feature, a lighting feature, and a vehicle-body cleanliness feature of the image.
14. The apparatus of claim 10, wherein the validity detection unit is further configured to:
concatenate the image quality features with the shooting features, and input the concatenated features into the pre-trained first detection model to detect the validity of the current frame;
wherein the validity of the current frame covers one or more of: whether the image is sharp, whether it depicts a vehicle, whether the lighting is sufficient, whether the vehicle body is stained, and whether the shot is tilted upward or downward.
15. The apparatus of claim 10, wherein the apparatus further comprises:
an image shooting guidance unit configured to provide an image shooting guidance strategy for the current frame when the current frame is detected not to be a valid frame, the image shooting guidance strategy comprising one of: aim at the vehicle, shoot under sufficient light, clean off stains before shooting, and hold the shooting terminal vertically.
16. The apparatus of claim 10, wherein the shooting rule comprises an image composition rule indicating that predetermined components fall within a predetermined area of the image, and the video shooting guidance strategy comprises adjusting the distance between the shooting terminal and the vehicle body.
17. The apparatus of claim 10, wherein the shooting rule comprises a movement direction rule for detecting whether the shooting terminal is moving in a predetermined direction, and the video shooting guidance strategy comprises moving in the direction opposite to the current movement direction or returning to the starting point to resume shooting.
18. The apparatus of claim 10, wherein the shooting rule comprises a shooting angle rule for detecting whether the current frame crosses a predetermined shooting angle.
19. A computer-readable storage medium having a computer program stored thereon which, when executed in a computer, causes the computer to perform the method of any one of claims 1-9.
20. A computing device comprising a memory and a processor, wherein the memory stores executable code which, when executed by the processor, implements the method of any one of claims 1-9.
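
For illustration only, and not the patented implementation: a minimal sketch, assuming invented field names and normalization choices, of how the shooting state information of claim 3 (acceleration magnitude and direction, orientation, position) could be packed into the shooting feature vector consumed in claims 1 and 5.

import numpy as np

# Hypothetical packing of raw shooting state into a fixed-length feature vector;
# the fields and scaling below are assumptions, not disclosed by the patent.
def shooting_features(accel_xyz, orientation_deg, lat, lon):
    ax, ay, az = accel_xyz
    magnitude = float(np.linalg.norm([ax, ay, az]))        # acceleration magnitude
    unit = [a / (magnitude + 1e-8) for a in (ax, ay, az)]  # acceleration direction
    return np.array([magnitude, *unit,
                     orientation_deg / 360.0,              # terminal orientation
                     lat / 90.0, lon / 180.0],             # coarse position
                    dtype=np.float32)

print(shooting_features((0.1, 9.7, 0.3), 90.0, 30.27, 120.16))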
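
Likewise hypothetical: a runnable PyTorch sketch of the two-stage detection of claims 1 and 5 — image quality features concatenated with shooting features for the first (validity) detector, and an LSTM second detector that processes the shooting and component features of earlier frames in chronological order before those of the current frame. All dimensions, layers, and names (FrameGuidance, rule_head, ...) are invented.

import torch
import torch.nn as nn

class FrameGuidance(nn.Module):
    # Illustrative stand-ins for the first and second detection models.
    def __init__(self, img_dim=128, shoot_dim=8, part_dim=32, hidden=64):
        super().__init__()
        # First detection model: validity score from [image quality | shooting] features.
        self.first_detector = nn.Sequential(
            nn.Linear(img_dim + shoot_dim, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid())
        # Second detection model: LSTM over per-frame [shooting | component] features.
        self.second_detector = nn.LSTM(shoot_dim + part_dim, hidden, batch_first=True)
        self.rule_head = nn.Linear(hidden, 1)

    def forward(self, img_feat, shoot_feat, frame_seq):
        # img_feat: (B, img_dim); shoot_feat: (B, shoot_dim)
        # frame_seq: (B, T, shoot_dim + part_dim), time-ordered, current frame last.
        validity = self.first_detector(torch.cat([img_feat, shoot_feat], dim=-1))
        states, _ = self.second_detector(frame_seq)   # earlier frames are consumed first
        rule_ok = torch.sigmoid(self.rule_head(states[:, -1]))
        return validity, rule_ok

model = FrameGuidance()
v, r = model(torch.randn(1, 128), torch.randn(1, 8), torch.randn(1, 5, 40))
print(v.item(), r.item())  # validity score, shooting-rule compliance score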
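
Finally, a hedged sketch of the three shooting rules of claims 7-9; the region bounds, direction convention, and angle step are assumed values, since the patent does not disclose thresholds at this level of detail.

from typing import Tuple

def composition_ok(part_box: Tuple[float, float, float, float],
                   region: Tuple[float, float, float, float] = (0.1, 0.1, 0.9, 0.9)) -> bool:
    # Image composition rule (claim 7): the detected component box (x1, y1, x2, y2,
    # normalized) must fall within the predetermined region; otherwise the guidance
    # strategy is to adjust the distance to the vehicle body.
    x1, y1, x2, y2 = part_box
    rx1, ry1, rx2, ry2 = region
    return rx1 <= x1 and ry1 <= y1 and x2 <= rx2 and y2 <= ry2

def movement_direction_ok(velocity_x: float, expected_sign: int = 1) -> bool:
    # Movement direction rule (claim 8): the terminal should move in the predetermined
    # direction (assumed here: positive x); otherwise guide the user to move the
    # opposite way or to return to the starting point.
    return velocity_x * expected_sign > 0

def angle_not_skipped(prev_angle_deg: float, cur_angle_deg: float,
                      max_step_deg: float = 30.0) -> bool:
    # Shooting angle rule (claim 9): consecutive frames must not jump across more
    # than a predetermined shooting angle around the vehicle.
    return abs(cur_angle_deg - prev_angle_deg) <= max_step_deg

# Example: a centered component box, steady rightward motion, and a 15-degree step.
print(composition_ok((0.2, 0.3, 0.8, 0.7)),
      movement_direction_ok(0.4),
      angle_not_skipped(45.0, 60.0))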
CN201911046418.5A 2019-10-30 2019-10-30 Method and device for assisting user in shooting vehicle video Active CN110650292B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201911046418.5A CN110650292B (en) 2019-10-30 2019-10-30 Method and device for assisting user in shooting vehicle video
CN202110313504.9A CN113038018B (en) 2019-10-30 2019-10-30 Method and device for assisting user in shooting vehicle video
PCT/CN2020/110735 WO2021082662A1 (en) 2019-10-30 2020-08-24 Method and apparatus for assisting user in shooting vehicle video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911046418.5A CN110650292B (en) 2019-10-30 2019-10-30 Method and device for assisting user in shooting vehicle video

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110313504.9A Division CN113038018B (en) 2019-10-30 2019-10-30 Method and device for assisting user in shooting vehicle video

Publications (2)

Publication Number Publication Date
CN110650292A (en) 2020-01-03
CN110650292B (en, granted) 2021-03-02

Family

ID=68995126

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110313504.9A Active CN113038018B (en) 2019-10-30 2019-10-30 Method and device for assisting user in shooting vehicle video
CN201911046418.5A Active CN110650292B (en) 2019-10-30 2019-10-30 Method and device for assisting user in shooting vehicle video

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110313504.9A Active CN113038018B (en) 2019-10-30 2019-10-30 Method and device for assisting user in shooting vehicle video

Country Status (2)

Country Link
CN (2) CN113038018B (en)
WO (1) WO2021082662A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113038018B (en) * 2019-10-30 2022-06-28 支付宝(杭州)信息技术有限公司 Method and device for assisting user in shooting vehicle video
CN111652087B (en) * 2020-05-15 2023-07-18 泰康保险集团股份有限公司 Car inspection method, device, electronic equipment and storage medium
CN112492105B (en) * 2020-11-26 2022-04-15 深源恒际科技有限公司 Video-based vehicle appearance part self-service damage assessment acquisition method and system
CN113408465B (en) * 2021-06-30 2022-08-26 平安国际智慧城市科技股份有限公司 Identity recognition method and device and related equipment
CN114241180A (en) * 2021-12-15 2022-03-25 平安科技(深圳)有限公司 Image detection method and device for vehicle damage claims, computer equipment and storage medium
WO2023178589A1 (en) * 2022-03-24 2023-09-28 深圳市大疆创新科技有限公司 Filming guiding method, electronic device, system and storage medium
CN115022671B (en) * 2022-06-02 2024-03-01 智道网联科技(北京)有限公司 Multi-process video output method, cloud terminal, electronic equipment and storage medium
CN114972298B (en) * 2022-06-16 2024-04-09 中国电建集团中南勘测设计研究院有限公司 Urban drainage pipeline video detection method and system
CN115107022B (en) * 2022-06-16 2024-04-19 华中科技大学 Industrial robot position error compensation method and system based on GBDT algorithm
CN115953726B (en) * 2023-03-14 2024-02-27 深圳中集智能科技有限公司 Machine vision container face damage detection method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268783A (en) * 2014-05-30 2015-01-07 翱特信息系统(中国)有限公司 Vehicle loss assessment method and device and terminal device
CN107194323A (en) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 Car damage identification image acquiring method, device, server and terminal device
CN107368776A (en) * 2017-04-28 2017-11-21 阿里巴巴集团控股有限公司 Car damage identification image acquiring method, device, server and terminal device
CN109145903A (en) * 2018-08-22 2019-01-04 阿里巴巴集团控股有限公司 A kind of image processing method and device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010032282A1 (en) * 2008-09-16 2010-03-25 パイオニア株式会社 Server device, mobile terminal, intersection guide system, and intersection guide method
US9654679B1 (en) * 2013-03-13 2017-05-16 Liberty Mutual Insurance Company Imagery quantification of damage
CN107079133A (en) * 2014-09-24 2017-08-18 松下知识产权经营株式会社 Vehicle-mounted electron mirror
WO2018057497A1 (en) * 2016-09-21 2018-03-29 Allstate Insurance Company Enhanced image capture and analysis of damaged tangible objects
CN113179368B (en) * 2018-05-08 2023-10-27 创新先进技术有限公司 Vehicle loss assessment data processing method and device, processing equipment and client
CN109325488A (en) * 2018-08-31 2019-02-12 阿里巴巴集团控股有限公司 For assisting the method, device and equipment of car damage identification image taking
CN110033386B (en) * 2019-03-07 2020-10-02 阿里巴巴集团控股有限公司 Vehicle accident identification method and device and electronic equipment
CN110245552B (en) * 2019-04-29 2023-07-18 创新先进技术有限公司 Interactive processing method, device, equipment and client for vehicle damage image shooting
CN113038018B (en) * 2019-10-30 2022-06-28 支付宝(杭州)信息技术有限公司 Method and device for assisting user in shooting vehicle video

Also Published As

Publication number Publication date
WO2021082662A1 (en) 2021-05-06
CN113038018A (en) 2021-06-25
CN113038018B (en) 2022-06-28
CN110650292A (en) 2020-01-03

Similar Documents

Publication Publication Date Title
CN110650292B (en) Method and device for assisting user in shooting vehicle video
TWI709091B (en) Image processing method and device
CN110569695B (en) Image processing method and device based on loss assessment image judgment model
CN110705405B (en) Target labeling method and device
CN111862296B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, three-dimensional reconstruction system, model training method and storage medium
CN108337505B (en) Information acquisition method and device
US20200175673A1 (en) Method and device for detecting defect of meal box, server, and storage medium
CN112396073A (en) Model training method and device based on binocular images and data processing equipment
CN110427810B (en) Video damage assessment method, device, shooting end and machine-readable storage medium
CN110458108B (en) Manual operation real-time monitoring method, system, terminal equipment and storage medium
CN110658918B (en) Positioning method, device and medium for eyeball tracking camera of video glasses
CN110837901A (en) Cloud test drive appointment auditing method and device, storage medium and cloud server
CN108289176B (en) Photographing question searching method, question searching device and terminal equipment
CN113963149A (en) Medical bill picture fuzzy judgment method, system, equipment and medium
CN112184544B (en) Image stitching method and device
CN112396654A (en) Method and device for determining pose of tracking object in image tracking process
CN109062220B (en) Method and device for controlling terminal movement
CN112802112B (en) Visual positioning method, device, server and storage medium
CN110660000A (en) Data prediction method, device, equipment and computer readable storage medium
CN115965934A (en) Parking space detection method and device
CN112101303B (en) Image data processing method and device and computer readable storage medium
CN113177967A (en) Object tracking method, system and storage medium for video data
CN112565586A (en) Automatic focusing method and device
CN111860051A (en) Vehicle-based loop detection method and device and vehicle-mounted terminal
CN111860050A (en) Loop detection method and device based on image frame and vehicle-mounted terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant