CN113190013B - Method and device for controlling movement of terminal - Google Patents

Method and device for controlling movement of terminal

Info

Publication number
CN113190013B
CN113190013B (application CN202110528746.XA)
Authority
CN
China
Prior art keywords
terminal
axis
preset
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110528746.XA
Other languages
Chinese (zh)
Other versions
CN113190013A (en)
Inventor
程远
郭昕
蒋晨
褚崴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd
Priority to CN202110528746.XA
Publication of CN113190013A
Application granted
Publication of CN113190013B
Legal status: Active


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
      • G05D1/02 Control of position or course in two dimensions
        • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
          • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
            • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
              • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
          • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
            • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
          • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
            • G05D1/0285 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using signals transmitted via a public communication network, e.g. GSM network

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Embodiments of this specification provide a method and a device for controlling the movement of a terminal. The method comprises the following steps: first, at least one image acquired by the terminal is obtained at a predetermined time interval; then, based on the at least one image, a neural network classification model determines one of a plurality of preset motion directions as the target motion direction; finally, the terminal is controlled to move by one preset unit in the target motion direction, so that the terminal continuously acquires images along its motion trajectory, the images being used for vehicle damage assessment. Because the images acquired while the terminal moves along the trajectory meet the requirements of survey and damage assessment, scene photos and damage-assessment photos do not need to be taken manually, which shortens the claims settlement cycle.

Description

Method and device for controlling movement of terminal
This application is a divisional application of the invention application entitled "Method and device for controlling movement of a terminal", filed on August 31, 2018 with application number 201811014362.0.
Technical Field
One or more embodiments of the present disclosure relate to the field of computers, and more particularly, to a method and apparatus for controlling movement of a terminal.
Background
In a car insurance claims scenario, an insurance company generally needs to send professional survey and damage-assessment personnel to the accident site to conduct an on-site survey and damage assessment, give a repair plan and compensation amount for the vehicle, take scene photos and damage-assessment photos, and keep the photos on file so that background reviewers can verify the damage and its price.
Because manual survey and damage assessment are required, insurance companies incur significant labor costs as well as the cost of training specialized survey and damage-assessment personnel. From the perspective of an ordinary user, the claims process involves waiting for survey staff to take photos on site, for assessors to assess damage at the repair location, and for background assessors to review the assessment, so the claims cycle can stretch to 1-3 days; the longer the user waits, the worse the experience.
It is therefore desirable to provide a solution for controlling the movement of a terminal so that the terminal continuously acquires images along a motion trajectory, the images being used for vehicle damage assessment. In this way, scene photos and damage-assessment photos do not need to be taken manually, which shortens the claims cycle.
Disclosure of Invention
One or more embodiments of this specification describe a method and apparatus for controlling the movement of a terminal, such that scene photos and damage-assessment photos do not need to be taken manually, thereby shortening the claims settlement cycle.
In a first aspect, a method for controlling movement of a terminal is provided, the method comprising:
acquiring at least one image acquired by the terminal according to a preset time interval;
determining one preset motion direction of a plurality of preset motion directions as a target motion direction by using a neural network classification model according to the at least one image;
and controlling the terminal to move by a preset unit according to the target movement direction so as to enable the terminal to continuously acquire images along a movement track, wherein the images are used for vehicle damage assessment.
In one possible embodiment, the method further comprises: performing damage assessment on the vehicle according to a plurality of images continuously collected while the terminal moves along the movement track.
In one possible embodiment, the method further comprises:
acquiring position information and/or posture information of the terminal associated with the image according to the preset time interval, wherein the position information is position relation information of the terminal and the vehicle, and the posture information is shooting angle information of the terminal;
the determining, according to the at least one image, one preset motion direction of a plurality of preset motion directions as a target motion direction using a neural network classification model includes:
and determining one preset motion direction of a plurality of preset motion directions as a target motion direction by using a neural network classification model according to the at least one image and the position information and/or the posture information of the terminal associated with the image.
In one possible embodiment, the method further comprises:
acquiring position information and/or posture information of the terminal associated with the image according to the preset time interval, wherein the position information is position relation information of the terminal and the vehicle, and the posture information is shooting angle information of the terminal;
and performing damage assessment on the vehicle according to a plurality of images continuously collected while the terminal moves along the movement track and the position information and/or the posture information of the terminal associated with the images.
In one possible implementation, the neural network classification model is pre-trained based on training samples that include a plurality of images in a vehicle impairment scene, each image having a label of the target motion direction that has been calibrated.
In one possible embodiment, the plurality of preset directions of motion include at least one of translation along an X-axis positive half-axis, translation along an X-axis negative half-axis, translation along a Y-axis positive half-axis, translation along a Y-axis negative half-axis, translation along a Z-axis positive half-axis, and translation along a Z-axis negative half-axis according to a first preset coordinate system;
the control of the terminal to move by a preset unit according to the target movement direction comprises the following steps:
and controlling the terminal to translate a preset distance according to the target movement direction.
Further, the first preset coordinate system is set according to the position of the vehicle.
In one possible embodiment, the plurality of preset directions of movement include at least one of clockwise rotation about an X-axis, counterclockwise rotation about an X-axis, clockwise rotation about a Y-axis, counterclockwise rotation about a Y-axis, clockwise rotation about a Z-axis, counterclockwise rotation about a Z-axis according to a second preset coordinate system;
the control of the terminal to move by a preset unit according to the target movement direction comprises the following steps:
and controlling the terminal to rotate by a preset angle according to the target movement direction.
Further, the second preset coordinate system is set according to the position of the terminal.
In one possible implementation manner, the controlling the terminal to move according to the target movement direction by a preset unit includes:
and controlling the terminal to move by a preset unit according to the target movement direction through an unmanned aerial vehicle, an automatic driving robot or a mechanical arm.
In a second aspect, there is provided an apparatus for controlling movement of a terminal, the apparatus comprising:
the acquisition unit is used for acquiring at least one image acquired by the terminal according to a preset time interval;
a determining unit, configured to determine, according to the at least one image acquired by the acquiring unit, one preset motion direction of a plurality of preset motion directions as a target motion direction using a neural network classification model;
the control unit is used for controlling the terminal to move by a preset unit according to the target movement direction determined by the determination unit so that the terminal continuously collects images along the movement track, and the images are used for vehicle damage assessment.
In a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
In a fourth aspect, there is provided a computing device comprising a memory having executable code stored therein and a processor which, when executing the executable code, implements the method of the first aspect.
According to the method and device provided by the embodiments of this specification, at least one image acquired by the terminal is obtained at a predetermined time interval; a neural network classification model then determines, based on the at least one image, one of a plurality of preset motion directions as the target motion direction; and the terminal is controlled to move by one preset unit in that direction, so that it continuously acquires images along a movement track, the images being used for vehicle damage assessment. Thus, based on at least one image acquired by the terminal, it can be determined how to move the terminal from its current position to the next position, and hence its movement track. The images acquired while the terminal moves along this track meet the requirements of survey and damage assessment, so no scene photos or damage-assessment photos need to be taken manually, which shortens the claims settlement cycle.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation scenario of an embodiment disclosed herein;
FIG. 2 illustrates a flow chart of a method of controlling movement of a terminal according to one embodiment;
FIG. 3 is a schematic view of a first predetermined coordinate system of one embodiment disclosed herein;
FIG. 4 is a schematic diagram of a second preset coordinate system of one embodiment disclosed herein;
fig. 5 shows a schematic block diagram of an apparatus for controlling movement of a terminal according to an embodiment.
Detailed Description
The following describes the scheme provided in the present specification with reference to the drawings.
Fig. 1 is a schematic diagram of an implementation scenario of an embodiment disclosed in this specification. As shown in fig. 1, a vehicle 11 is a vehicle that needs survey and damage assessment; the insurance company sends a camera-carrying device 12 to the accident site to collect images of the vehicle 11. The camera-carrying device 12 may be, but is not limited to, an automatic device such as an unmanned aerial vehicle, an automatic driving robot or a manipulator. In the embodiments of this specification, the movement path of the camera-carrying device 12 may be planned by an automatic path-planning algorithm so as to find the damaged parts and take photos and/or videos of them. Optionally, trajectory information gathered while the device 12 moves along this path, such as camera pose information and camera-vehicle positional relationship information, may also be recorded together with the photos and videos; an image algorithm can then use this information to assess the damage to the vehicle.
The scenario shown in fig. 1 mainly involves three processing stages: 1. Path planning: the next movement direction of the camera-carrying device 12 is planned based on the position information, the pose information and the images collected by the device; for example, fig. 1 shows the planned path along which the device 12 moves from position A to position B. 2. Image acquisition: the camera of the device 12 continuously captures and records image information; it can be understood that the camera may record continuous video or take individual photos, for example one or more images at position A and one or more images at position B. 3. Image recognition: damage assessment is performed on the vehicle using image-recognition techniques combined with path information (e.g., camera pose information and camera-vehicle positional relationship information) and the image information. For example, a damage-assessment algorithm may fuse camera pose information, camera-vehicle positional relationship information and image information to identify the damaged components and the extent of the damage, and to determine a repair price.
In the embodiments of this specification, the camera is also referred to as a terminal; that is, the terminal may be a dedicated device used only for taking photos or videos, or a general-purpose device, such as a mobile phone or a tablet, that also has communication or processing functions.
It should be noted that in the above scenario it is possible to collect only image information, or additionally to collect path information such as camera pose information and camera-vehicle positional relationship information. Correspondingly, path planning and image recognition may rely on image information alone, or on image information combined with such path information. Various feasible schemes can thus be formed, and all of them fall within the scope of the schemes provided by the embodiments of this specification.
Fig. 2 shows a flow chart of a method of controlling movement of a terminal according to one embodiment. The subject of the method may be the camera-carrying device 12 shown in fig. 1 (i.e., the terminal) or a control system additionally provided that communicates with the camera-carrying device 12 to control how the camera-carrying device 12 moves. As shown in fig. 2, the method for controlling the movement of the terminal in this embodiment includes the following steps: step 21, at least one image acquired by the terminal is acquired according to a preset time interval; step 22, determining one preset motion direction of a plurality of preset motion directions as a target motion direction by using a neural network classification model according to the at least one image; and step 23, controlling the terminal to move by a preset unit according to the target movement direction so as to enable the terminal to continuously acquire images along a movement track, wherein the images are used for vehicle damage assessment. Specific implementations of the above steps are described below.
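The three steps above can be sketched as a minimal control loop. Everything here is an illustrative stand-in: the patent does not define an API, so `capture_images`, `classify_direction` (the neural network classifier) and `move_one_unit` are hypothetical callables, and the direction labels are our own.

```python
# Illustrative preset direction labels: six translations (first coordinate
# system) and six rotations (second coordinate system); not the patent's names.
PRESET_DIRECTIONS = [
    "+X", "-X", "+Y", "-Y", "+Z", "-Z",
    "cw_X", "ccw_X", "cw_Y", "ccw_Y", "cw_Z", "ccw_Z",
]

def survey_loop(capture_images, classify_direction, move_one_unit, n_steps=10):
    """Drive the terminal along a trajectory, one preset unit per step.

    capture_images()         -> images taken at the preset interval (step 21)
    classify_direction(imgs) -> one of PRESET_DIRECTIONS (step 22, the classifier)
    move_one_unit(direction) -> moves the terminal one preset unit (step 23)
    """
    collected = []
    for _ in range(n_steps):
        images = capture_images()                 # step 21
        collected.extend(images)
        direction = classify_direction(images)    # step 22
        assert direction in PRESET_DIRECTIONS
        move_one_unit(direction)                  # step 23
    return collected  # images later used for damage assessment
```

Each iteration extends the movement track by one unit, so the full trajectory emerges from repeated classification rather than from a precomputed path.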
First, at step 21, at least one image acquired by the terminal is obtained at a predetermined time interval. It can be understood that the terminal continuously collects images while moving along the motion trajectory, either by taking photos or by recording video. In one example, the terminal records video continuously as it moves, and the at least one image consists of video frames extracted from the recording; in another example, the terminal takes a photo at every predetermined time interval, and the at least one image consists of the photos thus taken.
In addition, it should be noted that in step 21 the at least one image is used for path planning. There may be one image or several; when there are several, they may have been acquired at the same position or at different positions.
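Extracting the at least one image "according to a preset time interval" can be illustrated with a small sampling helper. A real implementation would pull frames from the camera or a recorded video, so the timestamp list here is purely illustrative.

```python
def sample_at_interval(timestamps, interval):
    """Pick indices of frames spaced at least `interval` seconds apart.

    `timestamps` lists the capture time of each candidate frame in seconds;
    the first frame is always kept (a sketch, not the patent's exact rule).
    """
    picked, last = [], None
    for i, t in enumerate(timestamps):
        if last is None or t - last >= interval:
            picked.append(i)
            last = t
    return picked
```

For example, with frames stamped every 0.5 s and a 1 s preset interval, every second frame is selected.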
Optionally, the position information and/or the posture information of the terminal associated with the image may be acquired at the predetermined time interval, where the position information is position relation information of the terminal and the vehicle, and the posture information is shooting angle information of the terminal.
Next, at step 22, one of a plurality of preset motion directions is determined as a target motion direction using a neural network classification model based on the at least one image. Optionally, in step 21, position information and/or posture information of the terminal are also obtained, and accordingly, in step 22, one preset motion direction of the plurality of preset motion directions may be determined as the target motion direction by using a neural network classification model according to the at least one image and the position information and/or posture information of the terminal associated with the image.
It will be appreciated that the neural network classification model is pre-trained on training samples comprising a plurality of images from vehicle damage-assessment scenes, each image labeled with a calibrated target motion direction.
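A minimal sketch of such pre-training, under heavy assumptions: a real system would train a convolutional network on raw images, whereas here each "image" is a small synthetic feature vector, softmax (multinomial logistic) regression stands in for the network, and only six direction classes are used.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_feat, n_classes = 300, 8, 6
W = np.zeros((n_feat, n_classes))  # the stand-in classifier's only parameters

# Synthetic labeled samples: each "image" is tagged with its calibrated
# target motion direction (class 0..5) and made separable by a feature bump.
X = rng.normal(size=(n_samples, n_feat))
y = rng.integers(0, n_classes, size=n_samples)
X[np.arange(n_samples), y] += 3.0

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(200):  # plain gradient descent on the cross-entropy loss
    grad_logits = softmax(X @ W)
    grad_logits[np.arange(n_samples), y] -= 1.0
    W -= 0.1 * (X.T @ grad_logits) / n_samples

train_accuracy = (softmax(X @ W).argmax(axis=1) == y).mean()
```

At deployment time the trained model maps the latest image(s), and optionally the associated position/posture information, to one of the preset motion directions at each step.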
In one example, the plurality of preset directions of motion includes at least one of translating along an X-axis positive half-axis, translating along an X-axis negative half-axis, translating along a Y-axis positive half-axis, translating along a Y-axis negative half-axis, translating along a Z-axis positive half-axis, and translating along a Z-axis negative half-axis according to a first preset coordinate system.
The first preset coordinate system may be fixed in advance, for example set according to the position of the vehicle. Fig. 3 is a schematic diagram of the first preset coordinate system of an embodiment disclosed in this specification; as shown in fig. 3, the first preset coordinate system is established centered on the position of the vehicle.
In one example, the plurality of preset directions of movement includes at least one of clockwise rotation about an X-axis, counterclockwise rotation about an X-axis, clockwise rotation about a Y-axis, counterclockwise rotation about a Y-axis, clockwise rotation about a Z-axis, counterclockwise rotation about a Z-axis according to a second preset coordinate system.
The second preset coordinate system may change as the terminal moves, since it is set according to the position of the terminal. Fig. 4 is a schematic diagram of the second preset coordinate system of an embodiment disclosed in this specification; as shown in fig. 4, the second preset coordinate system is established centered on the position of the camera (i.e., the terminal).
Finally, in step 23, the terminal is controlled to move by one preset unit in the target movement direction, so that the terminal continuously collects images along the movement track, the images being used for vehicle damage assessment. It can be understood that each movement of the terminal forms one segment of its movement track; the terminal may move many times, and the segments formed by the successive movements together constitute the terminal's complete movement track.
In one example, the movement of the terminal may include translation and rotation. And when the target movement direction belongs to translation, controlling the terminal to translate a preset distance according to the target movement direction. And when the target movement direction belongs to rotation, controlling the terminal to rotate by a preset angle according to the target movement direction.
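One way to picture "move by a preset unit" is a dispatcher that applies either a preset translation distance or a preset rotation angle, depending on which kind of direction the classifier chose. The pose layout, direction labels and unit values below are illustrative assumptions, not the patent's.

```python
def apply_motion(pose, direction, preset_distance=0.2, preset_angle=15.0):
    """Apply one preset unit of motion to `pose` and return it.

    `pose` is a dict with 'position' (x, y, z) and 'angles' (rotations about
    X, Y, Z in degrees). The patent only speaks of a "preset distance" and a
    "preset angle"; the concrete values here are placeholders.
    """
    translations = {"+X": (0, 1), "-X": (0, -1), "+Y": (1, 1),
                    "-Y": (1, -1), "+Z": (2, 1), "-Z": (2, -1)}
    rotations = {"ccw_X": (0, 1), "cw_X": (0, -1), "ccw_Y": (1, 1),
                 "cw_Y": (1, -1), "ccw_Z": (2, 1), "cw_Z": (2, -1)}
    if direction in translations:            # translate by the preset distance
        axis, sign = translations[direction]
        pos = list(pose["position"])
        pos[axis] += sign * preset_distance
        pose["position"] = tuple(pos)
    else:                                    # rotate by the preset angle
        axis, sign = rotations[direction]
        ang = list(pose["angles"])
        ang[axis] = (ang[axis] + sign * preset_angle) % 360
        pose["angles"] = tuple(ang)
    return pose
```

Translations would be interpreted in the first (vehicle-centered) coordinate system and rotations in the second (terminal-centered) one.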
Optionally, after step 23, damage assessment may be performed on the vehicle according to a plurality of images continuously collected while the terminal moves along the movement track. It will be appreciated that assessing the vehicle's damage and determining the terminal's movement track may be performed by the same device or by different devices; the embodiments of this specification do not limit this. The damage assessment may be based on the acquired images alone, or on the plurality of images together with the position information and/or posture information of the terminal associated with those images.
In one example, assessing the vehicle may include determining one or more of a damaged component, a degree of damage, and a repair price.
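The per-image recognition results must ultimately be fused into a single assessment. The patent does not prescribe a fusion rule, so the sketch below assumes a hypothetical (component, degree, price) triple per image, takes a majority vote on the damage degree, and keeps the highest quoted price per component.

```python
from collections import Counter

def aggregate_assessment(per_image_results):
    """Fuse per-image recognition triples into one assessment per component.

    Each element is an assumed (component, degree, price) triple produced by
    the recognition model for one image; this voting scheme is just one
    plausible fusion strategy, not the patent's.
    """
    by_component = {}
    for component, degree, price in per_image_results:
        by_component.setdefault(component, []).append((degree, price))
    summary = {}
    for component, obs in by_component.items():
        degree = Counter(d for d, _ in obs).most_common(1)[0][0]  # modal degree
        price = max(p for _, p in obs)  # conservative: highest quoted price
        summary[component] = (degree, price)
    return summary
```

Aggregating over many viewpoints is what makes the continuously collected trajectory images more reliable than any single photo.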
In one example, one or more neural network classification models may be employed to assess vehicle damage, which is not described in detail herein.
In addition, the terminal may be controlled to move by a preset unit according to the target movement direction by means of an unmanned aerial vehicle, an automatic driving robot, or a manipulator.
According to the method provided by the embodiments of this specification, at least one image acquired by the terminal is obtained at a predetermined time interval; a neural network classification model then determines, based on the at least one image, one of a plurality of preset motion directions as the target motion direction; and the terminal is controlled to move by one preset unit in that direction, so that it continuously acquires images along a movement track, the images being used for vehicle damage assessment. Thus, based on at least one image acquired by the terminal, it can be determined how to move the terminal from its current position to the next position, and hence its movement track. The images acquired while the terminal moves along this track meet the requirements of survey and damage assessment, so no scene photos or damage-assessment photos need to be taken manually, which shortens the claims settlement cycle.
According to an embodiment of another aspect, a device for controlling movement of a terminal is also provided. Fig. 5 shows a schematic block diagram of an apparatus for controlling movement of a terminal according to an embodiment. As shown in fig. 5, the apparatus 500 includes:
an acquiring unit 51, configured to acquire at least one image acquired by the terminal at predetermined time intervals;
a determining unit 52, configured to determine, according to the at least one image acquired by the acquiring unit 51, one preset motion direction of a plurality of preset motion directions as a target motion direction using a neural network classification model;
and a control unit 53, configured to control the terminal to move by a preset unit according to the target movement direction determined by the determining unit 52, so that the terminal continuously collects images along a movement track, where the images are used for vehicle damage assessment.
In one example, the apparatus further comprises:
and the damage assessment unit is used for assessing damage to the vehicle according to a plurality of continuously acquired images in the process of the terminal moving along the movement track determined by the control unit 53.
In one example, the obtaining unit 51 is further configured to obtain, at the predetermined time interval, location information and/or pose information of the terminal associated with the image, where the location information is location relationship information between the terminal and the vehicle, and the pose information is shooting angle information of the terminal;
the determining unit 52 is specifically configured to determine, according to at least one image acquired by the acquiring unit 51 and the position information and/or the posture information of the terminal associated with the image, one preset motion direction of a plurality of preset motion directions as a target motion direction using a neural network classification model.
In one example, the obtaining unit 51 is further configured to obtain, at the predetermined time interval, location information and/or pose information of the terminal associated with the image, where the location information is location relationship information between the terminal and the vehicle, and the pose information is shooting angle information of the terminal;
the apparatus further comprises:
and the damage assessment unit is used for performing damage assessment on the vehicle according to a plurality of images continuously collected while the terminal moves along the movement track determined by the control unit 53 and the position information and/or the posture information of the terminal associated with the images.
In one example, the neural network classification model is pre-trained based on training samples that include a plurality of images in a vehicle impairment scene, each image having a label of a target motion direction that has been calibrated.
In one example, the plurality of preset directions of motion includes at least one of translating along an X-axis positive half-axis, translating along an X-axis negative half-axis, translating along a Y-axis positive half-axis, translating along a Y-axis negative half-axis, translating along a Z-axis positive half-axis, and translating along a Z-axis negative half-axis according to a first preset coordinate system;
the control unit 53 is specifically configured to control the terminal to translate a preset distance according to the target movement direction determined by the determining unit 52.
Further, the first preset coordinate system is set according to the position of the vehicle.
In one example, the plurality of preset directions of motion includes at least one of clockwise rotation about an X-axis, counterclockwise rotation about an X-axis, clockwise rotation about a Y-axis, counterclockwise rotation about a Y-axis, clockwise rotation about a Z-axis, counterclockwise rotation about a Z-axis according to a second preset coordinate system;
the control unit 53 is specifically configured to control the terminal to rotate by a preset angle according to the target movement direction determined by the determining unit 52.
Further, the second preset coordinate system is set according to the position of the terminal.
In one example, the control unit 53 is specifically configured to control, by using an unmanned aerial vehicle, an automatic driving robot, or a manipulator, the terminal to move by a preset unit according to the target movement direction determined by the determining unit 52.
With the apparatus provided in this embodiment of the present disclosure, the acquiring unit 51 acquires at least one image captured by the terminal at a predetermined time interval; the determining unit 52 then uses a neural network classification model to determine one of a plurality of preset movement directions as the target movement direction according to the at least one image; and the control unit 53 controls the terminal to move by a preset unit in the target movement direction, so that the terminal continuously collects images along a movement track, where the images are used for vehicle damage assessment. In this way, based on at least one image captured by the terminal, the apparatus can determine how to move the terminal from its current position to the next position, and thereby determine the terminal's movement track. The images collected while the terminal moves along this track meet the requirements of investigation and damage assessment, so scene photos and damage assessment photos need not be shot manually, which shortens the claim settlement period.
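Taken together, the units described above form a closed acquire-classify-move loop. The following is a minimal sketch of that loop; `camera`, `classifier`, and `mover` are hypothetical stand-ins for the acquiring unit 51, determining unit 52, and control unit 53 respectively:

```python
def capture_loop(camera, classifier, mover, steps):
    """Acquire an image at each interval, pick a target direction with the
    classifier, move one preset unit, and keep the image for later
    vehicle damage assessment."""
    collected = []
    for _ in range(steps):
        image = camera()               # acquiring unit: image at preset interval
        collected.append(image)        # images later used for damage assessment
        direction = classifier(image)  # determining unit: target motion direction
        mover(direction)               # control unit: drone / robot / manipulator
    return collected

# Stub collaborators, purely for illustration.
moves = []
images = capture_loop(camera=lambda: "frame",
                      classifier=lambda img: "+X",
                      mover=moves.append,
                      steps=5)
```

The movement track is never planned in advance: it emerges one preset unit at a time from the classifier's per-image decisions.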
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with FIG. 2.
According to an embodiment of yet another aspect, there is also provided a computing device including a memory having executable code stored therein and a processor that, when executing the executable code, implements the method described in connection with FIG. 2.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The foregoing embodiments illustrate the objectives and technical solutions of the present invention in further detail; they are specific embodiments only and are not intended to limit the protection scope of the invention. Any modifications, equivalent substitutions, improvements, and the like made on the basis of the technical solutions of the present invention shall fall within its protection scope.

Claims (16)

1. A method of controlling movement of a terminal, the method comprising:
acquiring at least one image acquired by the terminal according to a preset time interval;
acquiring position information and/or posture information of the terminal associated with the image according to the preset time interval, wherein the position information is information on the positional relationship between the terminal and a vehicle, and the posture information is information on the shooting angle of the terminal;
determining one of a plurality of preset motion directions as a target motion direction by using a neural network classification model according to the at least one image and the position information and/or posture information of the terminal associated with the image;
and controlling the terminal to move by a preset unit according to the target movement direction so as to enable the terminal to continuously collect images along a movement track.
2. The method of claim 1, wherein the neural network classification model is pre-trained based on training samples comprising a plurality of images in a vehicle damage assessment scene, each image carrying a calibrated label of its target motion direction.
3. The method of claim 1, wherein the plurality of preset directions of motion comprise at least one of translation along an X-axis positive half-axis, translation along an X-axis negative half-axis, translation along a Y-axis positive half-axis, translation along a Y-axis negative half-axis, translation along a Z-axis positive half-axis, and translation along a Z-axis negative half-axis according to a first preset coordinate system;
the control of the terminal to move by a preset unit according to the target movement direction comprises the following steps:
and controlling the terminal to translate a preset distance according to the target movement direction.
4. A method according to claim 3, wherein the first preset coordinate system is set in dependence on the position of the vehicle.
5. The method of claim 1, wherein the plurality of preset directions of motion comprise at least one of clockwise rotation about an X-axis, counterclockwise rotation about an X-axis, clockwise rotation about a Y-axis, counterclockwise rotation about a Y-axis, clockwise rotation about a Z-axis, counterclockwise rotation about a Z-axis according to a second preset coordinate system;
the control of the terminal to move by a preset unit according to the target movement direction comprises the following steps:
and controlling the terminal to rotate by a preset angle according to the target movement direction.
6. The method of claim 5, wherein the second preset coordinate system is set according to a location of the terminal.
7. The method of any one of claims 1 to 6, wherein the controlling the terminal to move by a preset unit in the target movement direction comprises:
and controlling the terminal to move by a preset unit according to the target movement direction through an unmanned aerial vehicle, an automatic driving robot or a mechanical arm.
8. An apparatus for controlling movement of a terminal, the apparatus comprising:
the acquisition unit is used for acquiring at least one image acquired by the terminal according to a preset time interval, and is further used for acquiring position information and/or posture information of the terminal associated with the image according to the preset time interval, wherein the position information is information on the positional relationship between the terminal and a vehicle, and the posture information is information on the shooting angle of the terminal;
a determining unit, configured to determine, according to at least one image acquired by the acquiring unit and position information and/or posture information of the terminal associated with the image, one preset motion direction of a plurality of preset motion directions as a target motion direction using a neural network classification model;
and the control unit is used for controlling the terminal to move by a preset unit according to the target movement direction determined by the determination unit so as to enable the terminal to continuously acquire images along the movement track.
9. The apparatus of claim 8, wherein the neural network classification model is pre-trained based on training samples comprising a plurality of images in a vehicle damage assessment scene, each image carrying a calibrated label of its target motion direction.
10. The apparatus of claim 8, wherein the plurality of preset directions of motion comprise at least one of translation along an X-axis positive half-axis, translation along an X-axis negative half-axis, translation along a Y-axis positive half-axis, translation along a Y-axis negative half-axis, translation along a Z-axis positive half-axis, and translation along a Z-axis negative half-axis according to a first preset coordinate system;
the control unit is specifically configured to control the terminal to translate a preset distance according to the target movement direction determined by the determining unit.
11. The apparatus of claim 10, wherein the first preset coordinate system is set according to a position of the vehicle.
12. The apparatus of claim 8, wherein the plurality of preset directions of motion comprise at least one of clockwise rotation about an X-axis, counterclockwise rotation about an X-axis, clockwise rotation about a Y-axis, counterclockwise rotation about a Y-axis, clockwise rotation about a Z-axis, counterclockwise rotation about a Z-axis according to a second preset coordinate system;
the control unit is specifically configured to control the terminal to rotate by a preset angle according to the target movement direction determined by the determining unit.
13. The apparatus of claim 12, wherein the second preset coordinate system is set according to a location of the terminal.
14. The device according to any one of claims 8 to 13, wherein the control unit is specifically configured to control, by means of an unmanned aerial vehicle, an automatic driving robot or a manipulator, the terminal to move by a preset unit according to the target movement direction determined by the determination unit.
15. A computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-7.
16. A computing device comprising a memory having executable code stored therein and a processor, which when executing the executable code, implements the method of any of claims 1-7.
CN202110528746.XA 2018-08-31 2018-08-31 Method and device for controlling movement of terminal Active CN113190013B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110528746.XA CN113190013B (en) 2018-08-31 2018-08-31 Method and device for controlling movement of terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110528746.XA CN113190013B (en) 2018-08-31 2018-08-31 Method and device for controlling movement of terminal
CN201811014362.0A CN109062220B (en) 2018-08-31 2018-08-31 Method and device for controlling terminal movement

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811014362.0A Division CN109062220B (en) 2018-08-31 2018-08-31 Method and device for controlling terminal movement

Publications (2)

Publication Number Publication Date
CN113190013A CN113190013A (en) 2021-07-30
CN113190013B true CN113190013B (en) 2023-06-27

Family

ID=64759152

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110528746.XA Active CN113190013B (en) 2018-08-31 2018-08-31 Method and device for controlling movement of terminal
CN201811014362.0A Active CN109062220B (en) 2018-08-31 2018-08-31 Method and device for controlling terminal movement

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811014362.0A Active CN109062220B (en) 2018-08-31 2018-08-31 Method and device for controlling terminal movement

Country Status (1)

Country Link
CN (2) CN113190013B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114005313B (en) * 2021-01-12 2024-01-16 深圳动魅科技有限公司 Motion interaction equipment and control method thereof
CN114241398A (en) * 2022-02-23 2022-03-25 深圳壹账通科技服务有限公司 Vehicle damage assessment method, device, equipment and storage medium based on artificial intelligence

Citations (10)

Publication number Priority date Publication date Assignee Title
US5521843A (en) * 1992-01-30 1996-05-28 Fujitsu Limited System for and method of recognizing and tracking target mark
CN101866547A (en) * 2009-04-15 2010-10-20 徐克林 Monitoring system for insurance survey
CN101969548A (en) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 Active video acquiring method and device based on binocular camera shooting
CN105652891A (en) * 2016-03-02 2016-06-08 中山大学 Unmanned gyroplane moving target autonomous tracking device and control method thereof
CN106709699A (en) * 2016-12-22 2017-05-24 安徽保腾网络科技有限公司 Loss assessment method for insured vehicle
CN107168324A (en) * 2017-06-08 2017-09-15 中国矿业大学 A kind of robot path planning method based on ANFIS fuzzy neural networks
CN107368776A (en) * 2017-04-28 2017-11-21 阿里巴巴集团控股有限公司 Car damage identification image acquiring method, device, server and terminal device
CN107710283A (en) * 2016-12-02 2018-02-16 深圳市大疆创新科技有限公司 A kind of filming control method, device and control device
CN108111818A (en) * 2017-12-25 2018-06-01 北京航空航天大学 Moving target active perception method and apparatus based on multiple-camera collaboration
CN108230437A (en) * 2017-12-15 2018-06-29 深圳市商汤科技有限公司 Scene reconstruction method and device, electronic equipment, program and medium

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
JP5375897B2 (en) * 2011-08-25 2013-12-25 カシオ計算機株式会社 Image generation method, image generation apparatus, and program
CN105974940B (en) * 2016-04-29 2019-03-19 优利科技有限公司 Method for tracking target suitable for aircraft
WO2018022715A1 (en) * 2016-07-26 2018-02-01 University Of Connecticut Early prediction of an intention of a user's actions
FR3058548A1 (en) * 2016-11-09 2018-05-11 Parrot Drones DRONE COMPRISING A DEVICE FOR DETERMINING A REPRESENTATION OF A TARGET VIA A NEURON NETWORK, DETERMINING METHOD AND COMPUTER PROGRAM THEREFOR
US9940729B1 (en) * 2016-11-18 2018-04-10 Here Global B.V. Detection of invariant features for localization
CN106846152A (en) * 2016-12-22 2017-06-13 安徽保腾网络科技有限公司 Vehicle loss assessment system
CN106874914B (en) * 2017-01-12 2019-05-14 华南理工大学 A kind of industrial machinery arm visual spatial attention method based on depth convolutional neural networks
CN107392218B (en) * 2017-04-11 2020-08-04 创新先进技术有限公司 Vehicle loss assessment method and device based on image and electronic equipment
CN107817820A (en) * 2017-10-16 2018-03-20 复旦大学 A kind of unmanned plane autonomous flight control method and system based on deep learning
CN108247633B (en) * 2017-12-27 2021-09-03 珠海格力节能环保制冷技术研究中心有限公司 Robot control method and system

Non-Patent Citations (1)

Title
View-angle-independent behavior recognition method based on motion direction; Mei Xue et al.; Computer Engineering; Vol. 38, No. 15, pp. 159-161 *

Also Published As

Publication number Publication date
CN109062220A (en) 2018-12-21
CN113190013A (en) 2021-07-30
CN109062220B (en) 2021-06-29

Similar Documents

Publication Publication Date Title
CN111862296B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, three-dimensional reconstruction system, model training method and storage medium
CN113038018B (en) Method and device for assisting user in shooting vehicle video
CN110633629A (en) Power grid inspection method, device, equipment and storage medium based on artificial intelligence
WO2018053847A1 (en) Smart inventory management system, server and method, terminal, and program product
CN108839016B (en) Robot inspection method, storage medium, computer equipment and inspection robot
CN111860352B (en) Multi-lens vehicle track full tracking system and method
CN113190013B (en) Method and device for controlling movement of terminal
CN112621765B (en) Automatic equipment assembly control method and device based on manipulator
JP2019032218A (en) Location information recording method and device
CN113936340B (en) AI model training method and device based on training data acquisition
CN110458108B (en) Manual operation real-time monitoring method, system, terminal equipment and storage medium
KR102033075B1 (en) A providing location information systme using deep-learning and method it
CN107819793A (en) Collecting method and device for robot operating system
WO2018121794A1 (en) Control method, electronic device and storage medium
CN108122243B (en) Method for robot to detect moving object
CN109816588B (en) Method, device and equipment for recording driving trajectory
CN117716702A (en) Image shooting method and device and movable platform
CN115410121A (en) Video-based automatic determination method for joint seal person, electronic device and storage medium
JP7007649B2 (en) Optical flow estimator, optical flow estimation method, optical flow estimation system, and optical flow estimation program, as well as yaw rate estimator, yaw rate estimation method, yaw rate estimation system, and yaw rate estimation program.
CN113012223B (en) Target flow monitoring method and device, computer equipment and storage medium
CN111046805B (en) Animal balance analysis method, device and computer equipment
KR102555667B1 (en) Learning data collection system and method
CN111324131B (en) Tracking monitoring method of track type inspection robot based on human body radar
CN115272302B (en) Method, equipment and system for detecting parts in image
CN116989784A (en) Visual processing method and mobile robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant