CN111626264B - Live-action feedback type driving simulation method and device and server - Google Patents


Info

Publication number
CN111626264B
CN111626264B (application CN202010508614.6A)
Authority
CN
China
Prior art keywords
value
correction
corrected
video image
boundary value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010508614.6A
Other languages
Chinese (zh)
Other versions
CN111626264A (en)
Inventor
梁志彬 (Liang Zhibin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jin Baoxing Electronics (Shenzhen) Co.,Ltd.
Original Assignee
Jin Baoxing Electronics Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jin Baoxing Electronics Shenzhen Co ltd filed Critical Jin Baoxing Electronics Shenzhen Co ltd
Priority to CN202010508614.6A priority Critical patent/CN111626264B/en
Publication of CN111626264A publication Critical patent/CN111626264A/en
Application granted granted Critical
Publication of CN111626264B publication Critical patent/CN111626264B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09B 9/02 - Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/04 - Simulators for teaching or training purposes for teaching control of land vehicles
    • G09B 9/05 - Simulators for teaching or training purposes for teaching control of land vehicles, the view from a vehicle being simulated

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of data processing, and in particular to a live-action feedback type driving simulation method, device and server. In the method, whether to activate a correction process of the video image for the first road section is determined according to the first simulated driving score, so the learning feedback of the trainee is taken into account and a complete video image correction feedback mechanism is established. Because the second boundary value set is obtained by mapping the sand table model, its data accuracy can be ensured. When the first boundary value set is corrected, a preset correction coefficient keeps the iterative correction stable and constrained, prevents it from entering an infinite loop and reduces the processing load of the server, so that the video image can be corrected accurately, reliably and efficiently.

Description

Live-action feedback type driving simulation method and device and server
Technical Field
The invention relates to the technical field of data processing, in particular to a live-action feedback type driving simulation method, a live-action feedback type driving simulation device and a server.
Background
With the development of the automobile industry, per-capita vehicle ownership is increasing steadily, and in order to ensure traffic safety and avoid traffic accidents it is very necessary to train trainees to drive. In consideration of actual road conditions and safety, most driving training at the present stage is carried out on a simulated driving platform. Specifically, a coach or other training personnel drives a vehicle in a live-action manner and films the vehicle's driving path to form live-action video data; the live-action video data are edited and uploaded to the simulated driving platform, and a trainee can view the live-action video data through the simulated driving platform and judge the driving accuracy, which improves the trainee's practical driving theory and the efficiency of driving training. However, the above method can make it difficult to display the live-action video data accurately, which may cause a certain deviation when the trainee learns on the simulated driving platform.
Disclosure of Invention
In order to overcome at least the above-mentioned deficiencies of the prior art, an object of the present invention is to provide a live-action feedback type driving simulation method, device and server.
An embodiment of the invention provides a live-action feedback type driving simulation method, which comprises at least the following steps:
when a first simulated driving score for a first road section sent by a simulated driving terminal is received, determining whether to activate a correction process of a video image of the first road section according to the first simulated driving score; the video image is obtained by filming from a live-action-driven vehicle traveling on the first road section, the video image is stored in the simulated driving terminal, and the first simulated driving score is obtained by the simulated driving terminal according to the video image;
when the correction process is activated, acquiring a first boundary value set of the video image and a second boundary value set of a sand table model corresponding to the first road section, wherein the second boundary value set is obtained by mapping the sand table model;
determining a correction iteration value according to the first boundary value set, the second boundary value set and a preset correction coefficient; the preset correction coefficient is used for determining the iteration number of the correction iteration value, so that the correction iteration value represents the distortion weight between the first boundary value set and the second boundary value set;
and correcting the first boundary value set according to the correction iteration value to obtain a correction result, obtaining a corrected video image according to the correction result, and replacing the video image stored in the simulated driving terminal with the corrected video image.
In an optional manner, the determining whether to activate the correction process of the video image of the first road section according to the first simulated driving score includes:
determining a second simulated driving score for each of a plurality of second road segments by the simulated driving terminal;
determining the road section length, the traffic flow, the pedestrian flow and the number of lanes of each second road section, and carrying out weighted average on each second simulated driving score according to the road section length, the traffic flow, the pedestrian flow and the number of lanes of each second road section to obtain a reference score;
determining the road section length, the traffic flow, the pedestrian flow and the number of lanes of the first road section, and determining the prediction score of the first road section according to the reference score and the road section length, the traffic flow, the pedestrian flow and the number of lanes of the first road section;
and judging whether the absolute value of the difference value between the prediction score and the first simulated driving score exceeds a set threshold value, and if so, activating the correction process of the video image of the first road section.
In an optional manner, the determining a correction iteration value according to the first boundary value set, the second boundary value set and a preset correction coefficient includes:
determining a target feature vector between the first boundary value set and the second boundary value set; wherein the target feature vector is used to characterize the dissimilarity between the first boundary value set and the second boundary value set;
copying the target feature vector into two copies, obtaining a first iteration value according to one copy, and carrying out constraint processing on the other copy according to the preset correction coefficient to obtain a second iteration value;
performing first convergence verification on the iteration weight in the second iteration value to obtain a second convergence interval of the second iteration value;
determining a divergence interval corresponding to the second convergence interval in the first iteration value according to the preset correction coefficient, and performing second convergence verification on the iteration weight of the divergence interval to obtain a first convergence interval of the first iteration value;
and performing intersection processing on the first convergence interval and the second convergence interval to obtain a convergence constraint condition of the preset correction coefficient, and determining the correction iteration value according to the convergence constraint condition.
In an optional manner, the correcting the first boundary value set according to the correction iteration value to obtain a correction result includes:
analyzing at least one first boundary value in the first boundary value set, determining one first boundary value from the at least one first boundary value as a reference value, and extracting features of the reference value to obtain a reference feature vector;
iteratively correcting the reference characteristic vector according to the corrected iteration value to obtain a corrected characteristic vector;
performing visualization processing on the correction feature vector to obtain a correction boundary value of the reference value;
acquiring another first boundary value in the first boundary value set as an intermediate value, and performing iterative correction on the intermediate feature vector of the intermediate value, at least based on a partial iteration count of the corrected iteration value, according to the difference of the intermediate value relative to the corrected boundary value, to obtain a corrected intermediate feature vector; carrying out visualization processing on the corrected intermediate feature vector to obtain a corrected intermediate value of the intermediate value;
acquiring a further first boundary value in the first boundary value set as a subsequent value, and performing iterative correction on the subsequent feature vector of the subsequent value, at least based on a partial iteration count of the corrected iteration value, according to the difference between the subsequent value and the corrected boundary value and the adjacent previous corrected intermediate value, to obtain a corrected subsequent feature vector; carrying out visualization processing on the corrected subsequent feature vector to obtain a corrected subsequent value of the subsequent value;
and obtaining the correction result according to the correction boundary value, the correction intermediate value and the correction subsequent value.
In an alternative mode, the obtaining a corrected video image according to the correction result includes:
acquiring a first simulated driving score identifier existing in the correction result, wherein the first simulated driving score identifier in the correction result is identifier information generated at a plurality of score nodes in the video image;
determining a first image frame corresponding to each scoring node in the video image according to the first simulated driving scoring identifier at each scoring node;
sequentially performing covering processing and smoothing processing on the first image frame according to the correction result to obtain a second image frame;
and obtaining the corrected image according to the second image frame.
In an alternative manner, the acquiring the first set of boundary values of the video image includes:
dividing the video image into a plurality of image areas;
determining the gray difference between two adjacent pixels in each image area according to the average gray value of each image area; determining a first boundary value between two adjacent pixels according to the gray level difference; the first boundary value comprises a first set value, a second set value and a third set value;
and obtaining the first boundary value set of the video image according to the first boundary value of each image area.
In an optional manner, the method further comprises:
and when a third simulated driving score generated based on the corrected video image and sent by the simulated driving terminal is received, judging whether the absolute value of the difference value between the third simulated driving score and the prediction score exceeds the set threshold, and if so, activating a correction process aiming at the corrected video image.
An embodiment of the invention provides a live-action feedback type driving simulation device, which comprises at least:
an activation module, configured to determine, when a first simulated driving score for a first road section sent by a simulated driving terminal is received, whether to activate a correction process of a video image of the first road section according to the first simulated driving score; the video image is obtained by filming from a live-action-driven vehicle traveling on the first road section, the video image is stored in the simulated driving terminal, and the first simulated driving score is obtained by the simulated driving terminal according to the video image;
an obtaining module, configured to acquire a first boundary value set of the video image and a second boundary value set of a sand table model corresponding to the first road section when the correction process is activated, where the second boundary value set is obtained by mapping the sand table model;
the determining module is used for determining a correction iteration value according to the first boundary value set, the second boundary value set and a preset correction coefficient; the preset correction coefficient is used for determining the iteration number of the corrected iteration value, so that the corrected iteration value represents the distortion weight between the first boundary value set and the second boundary value set;
and the correction module is used for correcting the first boundary value set according to the correction iteration value to obtain a correction result, obtaining a correction video image according to the correction result, and replacing the video image stored in the simulated driving terminal with the correction video image.
An embodiment of the invention provides a server, which comprises a processor, and a memory and a bus connected with the processor; the processor and the memory communicate with each other through the bus; the processor is used for calling the program instructions in the memory so as to execute the live-action feedback type driving simulation method described above.
An embodiment of the present invention provides a readable storage medium, on which a program is stored, where the program, when executed by a processor, implements the live-action feedback type driving simulation method described above.
The live-action feedback type driving simulation method, device and server provided by the embodiments of the invention determine whether to activate a correction process of the video image for a first road section according to a first simulated driving score, so the learning feedback of the trainee is taken into account and a complete video image correction feedback mechanism is established. Because the second boundary value set is obtained by mapping the sand table model, its data accuracy can be ensured, which provides a reliable data basis for the correction of the first boundary value set. When the first boundary value set is corrected, a preset correction coefficient ensures the stability and constraint of the iterative correction, prevents the iterative correction from entering an infinite loop, and reduces the processing load of the server. The video image can therefore be corrected accurately, reliably and efficiently, the distortion of the video image is eliminated, the video image can be displayed accurately, and trainees are ensured to carry out simulated driving training and learning based on an accurate video image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a flowchart of a live-action feedback type driving simulation method according to an embodiment of the present invention.
Fig. 2 is a functional block diagram of a live-action feedback type driving simulation device according to an embodiment of the present invention.
Fig. 3 is a block diagram of a server according to an embodiment of the present invention.
Icon:
200-a live-action feedback type driving simulation device; 201-an activation module; 202-an obtaining module; 203-a determination module; 204-a correction module;
300-a server; 301-a processor; 302-a memory; 303-bus.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The inventor has found that the live-action video data are shot with a vehicle-mounted camera, and the footage is distorted when the vehicle-mounted camera shoots. Because real road-condition environments differ, adjusting the distortion parameters of the lens for every shooting environment would obviously increase the cost, while displaying the live-action video data directly causes a deviation in the displayed images, which affects the accuracy of simulated driving training and learning. Therefore, in order to avoid the display deviation caused by distortion, the live-action video data need to be corrected. In addition, considering that the live-action video data are used for the training and learning of trainees, the learning feedback of the trainees needs to be taken into account when the live-action video data are corrected, so as to improve the reliability of the correction.
The embodiment of the invention provides a live-action feedback type driving simulation method, a live-action feedback type driving simulation device and a server, which are used for solving the technical problem that live-action video data are difficult to accurately display in the prior art.
In order to better understand the technical solutions of the present invention, the following detailed descriptions of the technical solutions of the present invention are provided with the accompanying drawings and the specific embodiments, and it should be understood that the specific features in the embodiments and the examples of the present invention are the detailed descriptions of the technical solutions of the present invention, and are not limitations of the technical solutions of the present invention, and the technical features in the embodiments and the examples of the present invention may be combined with each other without conflict.
Fig. 1 is a flowchart of a live-action feedback type driving simulation method according to an embodiment of the present invention, which may include the following steps:
and step S21, when receiving a first simulated driving score for a first road section sent by a simulated driving terminal, determining whether to activate a correction process of a video image for the first road section according to the first simulated driving score.
Step S22, when the correction process is activated, acquiring a first boundary value set of the video image and a second boundary value set of the sand table model corresponding to the first road section.
And step S23, determining a correction iteration value according to the first boundary value set, the second boundary value set and a preset correction coefficient.
And step S24, correcting the first boundary value set according to the correction iteration value to obtain a correction result, obtaining a corrected video image according to the correction result, and replacing the video image stored in the simulated driving terminal with the corrected video image.
In step S21, the video image is captured based on the live-action-driven vehicle traveling on the first road segment, the video image is stored in the driving simulation terminal, and the first driving simulation score is obtained by the driving simulation terminal from the video image.
It can be understood that the trainee can watch the video image through the driving simulation terminal and conduct corresponding driving simulation training and learning, but the video image may have distortion, so that the trainee can output abnormal first simulation driving scores when training and learning are conducted based on the video image. To this end, based on analyzing the first simulated driving score, it can be determined whether to activate a corrective process.
In step S22, the second boundary value set is obtained by mapping the sand table model, and since the influence of lens distortion can be eliminated when mapping the sand table model, the second boundary value set can be used as an accurate reference, so as to correct the first boundary value, and further correct the video image.
In step S23, the preset correction coefficient is used to determine the iteration number of the correction iteration value, so that the correction iteration value represents the distortion weight between the first boundary value set and the second boundary value set.
It can be understood that, through steps S21-S24, whether to activate the correction process of the video image for the first road section is determined according to the first simulated driving score, so the learning feedback of the trainee is taken into account and a complete video image correction feedback mechanism is established. Since the second boundary value set is obtained by mapping the sand table model, its data accuracy can be ensured, which provides a reliable data basis for the correction of the first boundary value set. When the first boundary value set is corrected, the preset correction coefficient ensures the stability and constraint of the iterative correction, prevents it from entering an infinite loop, and reduces the processing load of the server. The video image can therefore be corrected accurately, reliably and efficiently, its distortion is eliminated, it can be displayed accurately, and trainees are ensured to carry out simulated driving training and learning based on an accurate video image.
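To make the flow of steps S21-S24 concrete, a minimal Python sketch is given below. It is only an illustration: the function name, the way the correction coefficient bounds the iteration count, and the linear update toward the sand-table reference are all assumptions rather than the method prescribed by the patent.

```python
# Hypothetical end-to-end sketch of steps S21-S24; every name and formula below
# is an illustrative assumption, not taken from the patent.

def run_correction_pipeline(first_score, predicted_score, threshold,
                            first_boundaries, second_boundaries, coeff=0.8):
    """Return corrected boundary values, or None when no correction is activated."""
    # Step S21: activate the correction process only when the reported score
    # deviates abnormally from the predicted score.
    if abs(predicted_score - first_score) <= threshold:
        return None
    # Step S23: derive a correction iteration value from the two boundary value
    # sets and the preset correction coefficient (here it simply bounds the loop).
    diffs = [abs(a - b) for a, b in zip(first_boundaries, second_boundaries)]
    iterations = min(int(max(diffs) / coeff) + 1, 50)
    # Step S24: nudge each first boundary value toward its sand-table reference.
    corrected = list(first_boundaries)
    for _ in range(iterations):
        corrected = [c + coeff * (ref - c)
                     for c, ref in zip(corrected, second_boundaries)]
    return corrected

# Example: a clearly abnormal score triggers the correction.
print(run_correction_pipeline(42.0, 78.0, threshold=10.0,
                              first_boundaries=[0.0, 0.5, 1.0],
                              second_boundaries=[0.1, 0.4, 0.9]))
```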
In a specific implementation, in order to ensure the reliability of activating the correction process, the determination needs to be made according to the learning feedback of the trainees. For this reason, the learning feedback of the trainees on other road sections needs to be taken into consideration, so as to judge whether the learning feedback on the current road section is abnormal and activate the correction process accordingly. Therefore, in step S21, determining whether to activate the correction process of the video image of the first road section according to the first simulated driving score may specifically include the following:
step S211, determining a second driving simulation score of the driving simulation terminal for each of a plurality of second road segments.
Step S212, determining the road section length, the traffic flow, the pedestrian flow and the number of lanes of each second road section, and performing weighted average on each second simulated driving score according to the road section length, the traffic flow, the pedestrian flow and the number of lanes of each second road section to obtain a reference score.
Step S213, determining the road section length, the traffic flow, the pedestrian flow and the number of lanes of the first road section, and determining the prediction score of the first road section according to the reference score and the road section length, the traffic flow, the pedestrian flow and the number of lanes of the first road section.
Step S214, determining whether an absolute value of a difference between the predicted score and the first simulated driving score exceeds a set threshold, and if so, activating the correction process for the video image of the first road segment.
In step S211, when the trainee views the video image of a second road section through the simulated driving terminal, that video image has already been corrected, so the second simulated driving score accurately reflects the learning level of the trainee on the second road section. It can therefore be understood that judging whether the first simulated driving score is abnormal on the basis of the second simulated driving scores ensures the reliability and authenticity of the determination.
It can be understood that, through steps S211 to S214, the second simulated driving scores of the second road sections are taken into consideration: the second simulated driving scores are weighted and averaged based on the road section length, traffic flow, pedestrian flow and number of lanes of each second road section to obtain a reference score, and the prediction score of the first road section is determined according to the reference score and the road section length, traffic flow, pedestrian flow and number of lanes of the first road section, which ensures the reliability of the abnormality determination for the first simulated driving score. Whether the learning feedback of the trainee on the current road section is abnormal can then be judged from the absolute value of the difference between the prediction score and the first simulated driving score, so that the correction process is activated reliably.
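As an illustration of steps S211-S214, the sketch below predicts a score for the first road section from already scored second road sections and checks the activation condition. The linear attribute weighting and all names (RoadSection, attribute_weight, and so on) are assumptions; the patent does not fix a concrete weighting formula.

```python
# Hedged sketch of steps S211-S214: weighted reference score, prediction for the
# first road section, and the activation check. The weighting scheme is assumed.
from dataclasses import dataclass

@dataclass
class RoadSection:
    length: float           # road section length
    traffic_flow: float     # vehicles passing per unit time
    pedestrian_flow: float  # pedestrians passing per unit time
    lanes: int              # number of lanes

def attribute_weight(s: RoadSection) -> float:
    # Assumed difficulty weight combining the four attributes named in the patent.
    return 0.001 * s.length + 0.01 * s.traffic_flow + 0.01 * s.pedestrian_flow + s.lanes

def predict_first_score(second_sections, second_scores, first_section):
    weights = [attribute_weight(s) for s in second_sections]
    # Step S212: weighted average of the second simulated driving scores.
    reference = sum(w * sc for w, sc in zip(weights, second_scores)) / sum(weights)
    # Step S213: scale the reference score by the relative difficulty of the first
    # section (assumption: harder sections are expected to yield lower scores).
    mean_weight = sum(weights) / len(weights)
    return reference * mean_weight / attribute_weight(first_section)

def should_activate_correction(predicted, first_score, threshold):
    # Step S214: activate the correction process on an abnormal deviation.
    return abs(predicted - first_score) > threshold

seconds = [RoadSection(800, 120, 40, 2), RoadSection(1500, 300, 10, 4)]
scores = [85.0, 78.0]
first = RoadSection(1200, 200, 30, 3)
predicted = predict_first_score(seconds, scores, first)
print(predicted, should_activate_correction(predicted, first_score=55.0, threshold=15.0))
```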
In a specific implementation, when a video image is corrected, the influence that correction of one pixel has on adjacent pixels needs to be taken into account, and a cyclic influence of that interaction on the correction iteration needs to be avoided so as to reduce the iterative computation load of the server. Therefore, in step S23, determining a correction iteration value according to the first boundary value set, the second boundary value set and the preset correction coefficient may specifically include the following:
step S231, according to the target feature vector between the first boundary value set and the second boundary value set; wherein the target feature vector is used to characterize dissimilarity between the first set of boundary values and the second set of boundary values.
Step S232, the target eigenvectors are copied into two, a first iteration value is obtained according to one eigenvector, and the other eigenvector is subjected to constraint processing according to a preset correction coefficient to obtain a second iteration value.
Step S233, performing first convergence verification on the iteration weight in the second iteration value to obtain a second convergence interval of the second iteration value.
Step S234, determining a divergence interval corresponding to the second convergence interval in the first iteration value according to the preset correction coefficient, and performing second convergence verification on the iteration weight of the divergence interval to obtain a first convergence interval of the first iteration value.
Step S235, performing intersection processing on the first convergence interval and the second convergence interval to obtain a convergence constraint condition of the preset correction coefficient, and determining the correction iteration value according to the convergence constraint condition.
In this embodiment, through steps S231 to S235, iteration values for the two cases (the first iteration value without the preset correction coefficient and the second iteration value constrained by it) are determined, and the first convergence interval of the first iteration value is derived from the convergence verification of the second iteration value. Iterative convergence and divergence caused by the influence between pixels during video correction can thus be considered together and balanced, which ensures the validity of the obtained correction iteration value with a small number of iterations, allows the server to correct the video image effectively with little processing overhead, and improves the fidelity of the restored video image.
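Because the patent describes steps S231-S235 only abstractly, the sketch below is a speculative reading in which the iteration values are modelled as candidate ranges of iteration counts, and the two convergence intervals are intersected to bound the final correction iteration value. All formulas and names are assumptions.

```python
# Speculative sketch of steps S231-S235; the interval arithmetic is an assumption.

def correction_iteration_value(first_set, second_set, coeff, max_iters=50):
    # Target feature vector: element-wise dissimilarity between the two boundary sets.
    target = [abs(a - b) for a, b in zip(first_set, second_set)]

    first_copy = target[:]                     # step S232: unconstrained copy
    second_copy = [v * coeff for v in target]  # step S232: constrained by the coefficient

    def convergence_interval(values):
        # Steps S233/S234: map the iteration weights to a range of iteration counts.
        return int(min(values) * max_iters), int(max(values) * max_iters) + 1

    second_interval = convergence_interval(second_copy)
    first_interval = convergence_interval(first_copy)

    # Step S235: intersect the two intervals to obtain the convergence constraint,
    # then pick an iteration count inside it (here simply the midpoint).
    lo = max(first_interval[0], second_interval[0])
    hi = min(first_interval[1], second_interval[1])
    if lo > hi:  # empty intersection: fall back to the constrained interval
        lo, hi = second_interval
    return max(1, (lo + hi) // 2)

print(correction_iteration_value([0.0, 0.5, 1.0], [0.1, 0.4, 0.9], coeff=0.8))
```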
In a specific implementation, in order to effectively reduce redundant information generated by the correction result, in step S24, the correcting the first boundary value set according to the correction iteration value to obtain the correction result may specifically include the following:
step S2411, analyzing at least one first boundary value in the first boundary value set, determining one first boundary value from the at least one first boundary value as a reference value, and extracting features of the reference value to obtain a reference feature vector.
Step S2412, performing iterative correction on the reference feature vector according to the corrected iterative value to obtain a corrected feature vector.
Step S2413, performing visualization processing on the corrected feature vector to obtain a corrected boundary value of the reference value.
Step S2414, acquiring another first boundary value in the first boundary value set as an intermediate value, and performing iterative correction on the intermediate feature vector of the intermediate value at least based on partial iteration times of the corrected iteration value according to the difference of the intermediate value relative to the corrected boundary value to obtain a corrected intermediate feature vector; and carrying out visualization processing on the corrected intermediate feature vector to obtain a corrected intermediate value of the intermediate value.
Step S2415, acquiring another first boundary value in the first boundary value set as a subsequent value, and performing iterative correction on the subsequent feature vector of the subsequent value at least based on partial iteration times of the corrected iteration value according to the difference between the subsequent value and the corrected boundary value and the adjacent previous corrected intermediate value to obtain a corrected subsequent feature vector; and carrying out visualization processing on the corrected subsequent feature vector to obtain a corrected subsequent value of the subsequent value.
Step S2416, obtaining the correction result according to the correction boundary value, the correction intermediate value, and the correction subsequent value.
It can be understood that, through steps S2411 to S2416, the influence that correction of one boundary value has on the others is taken into consideration, so that not every first boundary value has to be corrected with the complete iteration count of the correction iteration value. Redundant information caused by excessive iterations is thereby avoided, which effectively reduces redundant information in the correction result and improves the efficiency of obtaining it.
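A minimal sketch of steps S2411-S2416 is given below: the reference value receives the full iteration count while the intermediate and subsequent values reuse the correction already applied to their neighbours and therefore need only part of the iterations. The partial-iteration rule and the treatment of the "visualization" step as simple rounding are assumptions.

```python
# Illustrative sketch of steps S2411-S2416; the fraction rule is an assumption.

def correct_boundary_set(first_values, reference_targets, iterations, coeff=0.8):
    corrected = []
    for idx, (value, target) in enumerate(zip(first_values, reference_targets)):
        if idx == 0:
            steps = iterations                 # reference value: full iteration count
        else:
            # Intermediate / subsequent values: only a partial iteration count,
            # scaled by how far the value sits from the previously corrected neighbour.
            gap = abs(value - corrected[-1])
            steps = max(1, int(iterations * min(1.0, gap)))
        v = value
        for _ in range(steps):                 # iterative correction toward the target
            v += coeff * (target - v)
        corrected.append(round(v, 4))          # "visualization" back into a plain value
    return corrected

print(correct_boundary_set([0.0, 0.5, 1.0], [0.1, 0.4, 0.9], iterations=5))
```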
In an implementation, the whole video image is not used as a reference for the simulation driving training, and in order to improve the efficiency of obtaining the corrected image, in step S24, the obtaining the corrected video image according to the correction result may further include:
step S2421, acquiring a first simulated driving score identifier existing in the correction result, wherein the first simulated driving score identifier in the correction result is identifier information generated at a plurality of score nodes in the video image.
Step S2422, determining a first image frame corresponding to each scoring node in the video image according to the first simulated driving scoring marks at each scoring node.
And step S2423, sequentially performing covering processing and smoothing processing on the first image frame according to the correction result to obtain a second image frame.
Step S2424, obtaining the corrected image according to the second image frame.
It can be understood that, through steps S2421 to S2424, the first simulated driving score identifiers present in the correction result are determined, the first image frame corresponding to each score node is located in the video image based on those identifiers, and the first image frames are then subjected to the covering processing and the smoothing processing. As a result, not every image frame in the video image needs to be corrected, which reduces the overhead of the server and improves the efficiency of obtaining the corrected image. In addition, the smoothing processing ensures a smooth transition between the uncorrected image frames and the corrected image frames, so that the video image does not flicker during playback.
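The following NumPy sketch illustrates steps S2421-S2424 on grayscale frames: only the frames at the score nodes are covered with the corrected content and then smoothed, rather than re-rendering every frame. The use of a 3x3 box blur as the "smoothing processing" is an assumption; the patent does not name a specific filter.

```python
# Sketch of steps S2421-S2424; the 3x3 box blur stands in for the smoothing step.
import numpy as np

def correct_scored_frames(frames, score_node_indices, corrected_frames):
    """frames: list of HxW grayscale arrays; corrected_frames: same-shaped replacements."""
    out = [f.copy() for f in frames]
    for idx, corrected in zip(score_node_indices, corrected_frames):
        covered = corrected.astype(np.float32)   # covering processing
        # Smoothing processing: simple 3x3 box blur so the replaced frame blends in.
        padded = np.pad(covered, 1, mode="edge")
        h, w = covered.shape
        smoothed = sum(padded[dy:dy + h, dx:dx + w]
                       for dy in range(3) for dx in range(3)) / 9.0
        out[idx] = smoothed.astype(frames[idx].dtype)
    return out

frames = [np.full((4, 4), 100, dtype=np.uint8) for _ in range(5)]
corrected = [np.full((4, 4), 120, dtype=np.uint8)]
print(correct_scored_frames(frames, [2], corrected)[2])
```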
In a specific implementation, in order to accurately determine the first boundary value set, in step S22, the acquiring the first boundary value set of the video image may specifically include the following:
step S221, dividing the video image into a plurality of image areas.
Step S222, determining the gray difference between two adjacent pixels in each image area according to the average gray value of each image area; determining a first boundary value between two adjacent pixels according to the gray level difference; the first boundary value comprises a first set value, a second set value and a third set value.
Step S223, obtaining the first boundary value set of the video image according to the first boundary value of each image region.
It can be understood that, through steps S221 to S223, the gray-scale values of the different image regions of the video image are taken into consideration, so the first boundary values are not determined in a one-size-fits-all manner. Since each first boundary value takes the first set value "0", the second set value "0.5" or the third set value "1", the gray-scale difference between adjacent pixels is graded, which ensures the accuracy of the determined first boundary value set.
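Steps S221-S223 can be illustrated with the NumPy sketch below, which splits the frame into regions, compares each horizontally adjacent pixel pair with the region's mean gray value, and grades the difference into the three set values 0, 0.5 and 1. The region size and the grading thresholds are assumptions.

```python
# Sketch of steps S221-S223; region size and thresholds are illustrative only.
import numpy as np

def first_boundary_value_set(image, region_size=64):
    h, w = image.shape
    boundary_values = []
    for y in range(0, h, region_size):
        for x in range(0, w, region_size):
            region = image[y:y + region_size, x:x + region_size].astype(np.float32)
            mean_gray = region.mean()
            # Gray difference between horizontally adjacent pixels, relative to the mean.
            diff = np.abs(np.diff(region, axis=1)) / max(mean_gray, 1.0)
            # Grade into the first (0), second (0.5) and third (1) set values.
            graded = np.where(diff < 0.1, 0.0, np.where(diff < 0.3, 0.5, 1.0))
            boundary_values.append(graded)
    return boundary_values

image = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(np.uint8)
print(len(first_boundary_value_set(image)), "image regions graded")
```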
Optionally, in order to ensure reliability after the video image is corrected, on the basis of the above method, the method provided by this embodiment may further include the following:
and when a third simulated driving score generated based on the corrected video image and sent by the simulated driving terminal is received, judging whether the absolute value of the difference value between the third simulated driving score and the prediction score exceeds the set threshold, and if so, activating a correction process aiming at the corrected video image.
It is understood that, with the above, it is possible to determine whether the corrected video image is acceptable based on the third simulated driving score, thereby ensuring the reliability of the video image correction.
On the basis of the above, an embodiment of the invention provides a live-action feedback type driving simulation device 200. Fig. 2 is a functional block diagram of the live-action feedback type driving simulation device 200 according to an embodiment of the present invention, where the live-action feedback type driving simulation device 200 includes:
the activation module 201 is configured to determine whether to activate a correction process of a video image for a first road segment according to a first simulated driving score when the first simulated driving score for the first road segment sent by a simulated driving terminal is received; the video image is obtained by shooting based on a live-action driving vehicle running on the first road section, the video image is stored in the driving simulation terminal, and the first driving simulation score is obtained by the driving simulation terminal according to the video image.
An obtaining module 202, configured to acquire a first boundary value set of the video image and a second boundary value set of a sand table model corresponding to the first road section when the correction process is activated, where the second boundary value set is obtained by mapping the sand table model.
A determining module 203, configured to determine a modified iteration value according to the first boundary value set, the second boundary value set, and a preset modification coefficient; the preset correction coefficient is used for determining the iteration number of the corrected iteration value, so that the corrected iteration value represents the distortion weight between the first boundary value set and the second boundary value set.
And the correction module 204 is configured to correct the first boundary value set according to the correction iteration value to obtain a correction result, obtain a corrected video image according to the correction result, and replace the video image stored in the driving simulation terminal with the corrected video image.
In an optional manner, the activation module 201 is specifically configured to:
determining a second simulated driving score for each of a plurality of second road segments by the simulated driving terminal;
determining the road section length, the traffic flow, the pedestrian flow and the number of lanes of each second road section, and carrying out weighted average on each second simulated driving score according to the road section length, the traffic flow, the pedestrian flow and the number of lanes of each second road section to obtain a reference score;
determining the road section length, the traffic flow, the pedestrian flow and the number of lanes of the first road section, and determining the prediction score of the first road section according to the reference score and the road section length, the traffic flow, the pedestrian flow and the number of lanes of the first road section;
and judging whether the absolute value of the difference value between the prediction score and the first simulated driving score exceeds a set threshold value, and if so, activating the correction process of the video image of the first road section.
In an optional manner, the determining module 203 is specifically configured to:
determining a target feature vector between the first boundary value set and the second boundary value set; wherein the target feature vector is used to characterize the dissimilarity between the first boundary value set and the second boundary value set;
copying the target feature vector into two copies, obtaining a first iteration value according to one copy, and carrying out constraint processing on the other copy according to the preset correction coefficient to obtain a second iteration value;
performing first convergence verification on the iteration weight in the second iteration value to obtain a second convergence interval of the second iteration value;
determining a divergence interval corresponding to the second convergence interval in the first iteration value according to the preset correction coefficient, and performing second convergence verification on the iteration weight of the divergence interval to obtain a first convergence interval of the first iteration value;
and performing intersection processing on the first convergence interval and the second convergence interval to obtain a convergence constraint condition of the preset correction coefficient, and determining the correction iteration value according to the convergence constraint condition.
In an optional manner, the modification module 204 is specifically configured to:
analyzing at least one first boundary value in the first boundary value set, determining one first boundary value from the at least one first boundary value as a reference value, and extracting features of the reference value to obtain a reference feature vector;
iteratively correcting the reference characteristic vector according to the corrected iteration value to obtain a corrected characteristic vector;
performing visualization processing on the correction feature vector to obtain a correction boundary value of the reference value;
acquiring another first boundary value in the first boundary value set as an intermediate value, and performing iterative correction on the intermediate feature vector of the intermediate value, at least based on a partial iteration count of the corrected iteration value, according to the difference of the intermediate value relative to the corrected boundary value, to obtain a corrected intermediate feature vector; carrying out visualization processing on the corrected intermediate feature vector to obtain a corrected intermediate value of the intermediate value;
acquiring a further first boundary value in the first boundary value set as a subsequent value, and performing iterative correction on the subsequent feature vector of the subsequent value, at least based on a partial iteration count of the corrected iteration value, according to the difference between the subsequent value and the corrected boundary value and the adjacent previous corrected intermediate value, to obtain a corrected subsequent feature vector; carrying out visualization processing on the corrected subsequent feature vector to obtain a corrected subsequent value of the subsequent value;
and obtaining the correction result according to the correction boundary value, the correction intermediate value and the correction subsequent value.
In an optional manner, the modification module 204 is specifically configured to:
acquiring a first simulated driving score identifier existing in the correction result, wherein the first simulated driving score identifier in the correction result is identifier information generated at a plurality of score nodes in the video image;
determining a first image frame corresponding to each scoring node in the video image according to the first simulated driving scoring identifier at each scoring node;
sequentially performing covering processing and smoothing processing on the first image frame according to the correction result to obtain a second image frame;
and obtaining the corrected image according to the second image frame.
In an optional manner, the obtaining module 202 is specifically configured to:
dividing the video image into a plurality of image areas;
determining the gray difference between two adjacent pixels in each image area according to the average gray value of each image area; determining a first boundary value between two adjacent pixels according to the gray level difference; the first boundary value comprises a first set value, a second set value and a third set value;
and obtaining the first boundary value set of the video image according to the first boundary value of each image area.
In an optional manner, the activation module 201 is further configured to:
and when a third simulated driving score generated based on the corrected video image and sent by the simulated driving terminal is received, judging whether the absolute value of the difference value between the third simulated driving score and the prediction score exceeds the set threshold, and if so, activating a correction process aiming at the corrected video image.
The server 300 includes a processor and a memory, the activation module 201, the obtaining module 202, the determining module 203, the modifying module 204, and the like are all stored in the memory as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided, and by adjusting the kernel parameters the video image is corrected accurately, reliably and efficiently, so that the distortion of the video image is eliminated, the video image can be displayed accurately, and trainees are ensured to carry out simulated driving training and learning based on an accurate video image.
An embodiment of the present invention provides a readable storage medium, on which a program is stored, which, when executed by a processor, implements the live-action feedback-type driving simulation method.
The embodiment of the invention provides a processor, which is used for running a program, wherein the live-action feedback type driving simulation method is executed when the program runs.
In the embodiment of the present invention, as shown in Fig. 3, the server 300 includes at least one processor 301, and at least one memory 302 and a bus 303 connected to the processor 301; the processor 301 and the memory 302 communicate with each other through the bus 303; the processor 301 is configured to call the program instructions in the memory 302 to execute the live-action feedback type driving simulation method. The device referred to as the server 300 here may be a server, a PC, a tablet (PAD), a mobile phone, or the like.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, servers (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing server to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing server, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a server includes one or more processors (CPUs), memory, and a bus. The server may also include input/output interfaces, network interfaces, and the like.
The memory may include volatile memory in a computer readable medium, Random Access Memory (RAM) and/or nonvolatile memory such as Read Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip. The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or server that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or server. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or server comprising the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (8)

1. A live-action feedback type driving simulation method, characterized by comprising the following steps:
when a first simulated driving score for a first road section sent by a simulated driving terminal is received, determining whether to activate a correction process of a video image of the first road section according to the first simulated driving score; the video image is obtained by filming from a live-action-driven vehicle traveling on the first road section, the video image is stored in the simulated driving terminal, and the first simulated driving score is obtained by the simulated driving terminal according to the video image;
when the correction process is activated, acquiring a first boundary value set of the video image and a second boundary value set of a sand table model corresponding to the first road section, wherein the second boundary value set is obtained by mapping the sand table model;
determining a correction iteration value according to the first boundary value set, the second boundary value set and a preset correction coefficient; the preset correction coefficient is used for determining the iteration number of the correction iteration value, so that the correction iteration value represents the distortion weight between the first boundary value set and the second boundary value set;
correcting the first boundary value set according to the correction iteration value to obtain a correction result, obtaining a corrected video image according to the correction result, and replacing the video image stored in the simulated driving terminal with the corrected video image;
the obtaining of the corrected video image according to the correction result includes:
acquiring a first simulated driving score identifier existing in the correction result, wherein the first simulated driving score identifier in the correction result is identifier information generated at a plurality of score nodes in the video image;
determining a first image frame corresponding to each scoring node in the video image according to the first simulated driving scoring identifier at each scoring node;
sequentially performing covering processing and smoothing processing on the first image frame according to the correction result to obtain a second image frame;
obtaining the corrected image according to the second image frame;
the correcting the first boundary value set according to the correction iteration value to obtain a correction result includes:
analyzing at least one first boundary value in the first boundary value set, determining one first boundary value from the at least one first boundary value as a reference value, and extracting features of the reference value to obtain a reference feature vector;
iteratively correcting the reference characteristic vector according to the corrected iteration value to obtain a corrected characteristic vector;
performing visualization processing on the correction feature vector to obtain a correction boundary value of the reference value;
acquiring another first boundary value in the first boundary value set as an intermediate value, and performing iterative correction on the intermediate feature vector of the intermediate value, at least based on a partial iteration count of the corrected iteration value, according to the difference of the intermediate value relative to the corrected boundary value, to obtain a corrected intermediate feature vector; carrying out visualization processing on the corrected intermediate feature vector to obtain a corrected intermediate value of the intermediate value;
acquiring a further first boundary value in the first boundary value set as a subsequent value, and performing iterative correction on the subsequent feature vector of the subsequent value, at least based on a partial iteration count of the corrected iteration value, according to the difference between the subsequent value and the corrected boundary value and the adjacent previous corrected intermediate value, to obtain a corrected subsequent feature vector; carrying out visualization processing on the corrected subsequent feature vector to obtain a corrected subsequent value of the subsequent value;
and obtaining the correction result according to the correction boundary value, the correction intermediate value and the correction subsequent value.
2. The method of claim 1, wherein the determining whether to activate the correction process of the video image of the first road section according to the first simulated driving score comprises:
determining a second simulated driving score for each of a plurality of second road segments by the simulated driving terminal;
determining the road section length, the traffic flow, the pedestrian flow and the number of lanes of each second road section, and carrying out weighted average on each second simulated driving score according to the road section length, the traffic flow, the pedestrian flow and the number of lanes of each second road section to obtain a reference score;
determining the road section length, the traffic flow, the pedestrian flow and the number of lanes of the first road section, and determining the prediction score of the first road section according to the reference score and the road section length, the traffic flow, the pedestrian flow and the number of lanes of the first road section;
and judging whether the absolute value of the difference value between the prediction score and the first simulated driving score exceeds a set threshold value, and if so, activating the correction process of the video image of the first road section.
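A minimal sketch of the activation test in claim 2, assuming each road section is described by its length, traffic flow, pedestrian flow and lane count. The weighting formula, the threshold value and every name here are illustrative assumptions, not taken from the patent.

```python
def should_activate_correction(first_section, second_sections, first_score, threshold=10.0):
    """first_section / second_sections: dicts with keys 'length', 'traffic_flow',
    'pedestrian_flow', 'lanes'; each second section also carries its 'score'."""
    def weight(sec):
        # Longer, busier sections contribute more; more lanes ease the section (assumed).
        return sec['length'] * (sec['traffic_flow'] + sec['pedestrian_flow']) / max(sec['lanes'], 1)

    total_w = sum(weight(s) for s in second_sections)
    reference_score = sum(weight(s) * s['score'] for s in second_sections) / total_w

    # Scale the reference score by how the first section compares with an average second section.
    avg_w = total_w / len(second_sections)
    predicted_score = reference_score * avg_w / max(weight(first_section), 1e-6)

    # Activate the correction process when prediction and actual score diverge too far.
    return abs(predicted_score - first_score) > threshold
```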
3. The method according to claim 1, wherein the determining a correction iteration value according to the first boundary value set, the second boundary value set and a preset correction coefficient comprises:
determining a target feature vector between the first boundary value set and the second boundary value set; wherein the target feature vector is used to characterize the dissimilarity between the first boundary value set and the second boundary value set;
duplicating the target feature vector to obtain two copies, obtaining a first iteration value according to one copy, and carrying out constraint processing on the other copy according to the preset correction coefficient to obtain a second iteration value;
performing first convergence verification on the iteration weight in the second iteration value to obtain a second convergence interval of the second iteration value;
determining a divergence interval corresponding to the second convergence interval in the first iteration value according to the preset correction coefficient, and performing second convergence verification on the iteration weight of the divergence interval to obtain a first convergence interval of the first iteration value;
and performing intersection processing on the first convergence interval and the second convergence interval to obtain a convergence constraint condition of the preset correction coefficient, and determining the correction iteration value according to the convergence constraint condition.
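Claim 3 can be illustrated, very loosely, by the sketch below. The claim does not fix concrete formulas, so the target feature vector, the convergence test and the interval intersection are all placeholders under assumed definitions.

```python
import numpy as np

def correction_iteration_value(first_set, second_set, correction_coeff, max_iter=100):
    """Rough sketch: derive an iteration count that satisfies convergence
    constraints on both copies of the target feature vector."""
    target = np.asarray(first_set, dtype=float) - np.asarray(second_set, dtype=float)

    first_copy = target.copy()                                   # used directly for the first iteration value
    second_copy = np.clip(target * correction_coeff, -1.0, 1.0)  # constrained copy for the second iteration value

    def convergence_interval(vec, decay):
        # Crude "convergence verification": iteration counts at which the decayed norm drops below 1.
        norms = [np.linalg.norm(vec) * (decay ** k) for k in range(max_iter)]
        inside = [k for k, n in enumerate(norms) if n < 1.0]
        return (inside[0], max_iter) if inside else (max_iter, max_iter)

    lo2, hi2 = convergence_interval(second_copy, decay=0.9)
    lo1, hi1 = convergence_interval(first_copy, decay=0.9 * correction_coeff)

    lo, hi = max(lo1, lo2), min(hi1, hi2)        # intersection of the two convergence intervals
    return max(1, lo)                            # smallest iteration count satisfying both constraints
```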
4. The method according to any one of claims 1-3, wherein the obtaining a first boundary value set of the video image comprises:
dividing the video image into a plurality of image areas;
determining a gray difference between every two adjacent pixels in each image area according to the average gray value of the image area; determining a first boundary value between the two adjacent pixels according to the gray difference; wherein the first boundary value comprises a first set value, a second set value and a third set value;
and obtaining the first boundary value set of the video image according to the first boundary value of each image area.
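A small sketch of the boundary-value extraction in claim 4, assuming a grayscale frame held as a 2-D NumPy array. The grid size, the difference thresholds and the three set values are assumed parameters chosen only for illustration.

```python
import numpy as np

def first_boundary_value_set(frame, grid=(4, 4), set_values=(16, 32, 64)):
    """Split the frame into image areas, compare adjacent pixels against each
    area's mean gray value, and bucket each area into one of three set values."""
    h, w = frame.shape
    bh, bw = h // grid[0], w // grid[1]
    boundary_set = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            area = frame[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].astype(float)
            mean_gray = area.mean()
            # Gray difference between horizontally adjacent pixels, normalised by the area mean.
            diff = np.abs(np.diff(area, axis=1)) / max(mean_gray, 1.0)
            level = diff.mean()
            # Map the average difference of the area onto one of the three set values.
            if level < 0.05:
                boundary_set.append(set_values[0])
            elif level < 0.15:
                boundary_set.append(set_values[1])
            else:
                boundary_set.append(set_values[2])
    return boundary_set
```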
5. The method of claim 2, further comprising:
and when a third simulated driving score generated based on the corrected video image and sent by the simulated driving terminal is received, judging whether the absolute value of the difference value between the third simulated driving score and the prediction score exceeds the set threshold, and if so, activating a correction process aiming at the corrected video image.
6. A live-action feedback type simulated driving apparatus, comprising:
the apparatus comprises an activation module, an obtaining module, a determining module and a correction module, wherein the activation module is used for determining, when a first simulated driving score for a first road section sent by a simulated driving terminal is received, whether to activate a correction process of a video image of the first road section according to the first simulated driving score; the video image is obtained by shooting from a live-action driving vehicle travelling on the first road section, the video image is stored in the simulated driving terminal, and the first simulated driving score is obtained by the simulated driving terminal according to the video image;
an obtaining module, configured to obtain a first boundary value set of the video image and a second boundary value set of a sand table model corresponding to the first road section when the correction process is activated, wherein the second boundary value set is obtained by mapping the sand table model;
a determining module, configured to determine a correction iteration value according to the first boundary value set, the second boundary value set and a preset correction coefficient; wherein the preset correction coefficient is used for determining the iteration number of the correction iteration value, so that the correction iteration value represents the distortion weight between the first boundary value set and the second boundary value set;
a correction module, configured to correct the first boundary value set according to the correction iteration value to obtain a correction result, obtain a corrected video image according to the correction result, and replace the video image stored in the simulated driving terminal with the corrected video image, wherein the correction module is specifically configured to: analyze at least one first boundary value in the first boundary value set, determine one first boundary value from the at least one first boundary value as a reference value, and extract features of the reference value to obtain a reference feature vector; iteratively correct the reference feature vector according to the correction iteration value to obtain a corrected feature vector; perform visualization processing on the corrected feature vector to obtain a corrected boundary value of the reference value; acquire another first boundary value in the first boundary value set as an intermediate value, and perform iterative correction on the intermediate feature vector of the intermediate value, based on at least part of the iteration number of the correction iteration value, according to the difference of the intermediate value relative to the corrected boundary value, to obtain a corrected intermediate feature vector; perform visualization processing on the corrected intermediate feature vector to obtain a corrected intermediate value of the intermediate value; acquire yet another first boundary value in the first boundary value set as a subsequent value, and perform iterative correction on the subsequent feature vector of the subsequent value, based on at least part of the iteration number of the correction iteration value, according to the difference between the subsequent value and both the corrected boundary value and the adjacent previous corrected intermediate value, to obtain a corrected subsequent feature vector; perform visualization processing on the corrected subsequent feature vector to obtain a corrected subsequent value of the subsequent value; and obtain the correction result according to the corrected boundary value, the corrected intermediate value and the corrected subsequent value.
7. A server, comprising a processor, and a memory and a bus connected to the processor; wherein the processor and the memory communicate with each other through the bus; and the processor is configured to call program instructions in the memory to perform the live-action feedback type simulated driving method of any one of claims 1-5.
8. A readable storage medium, having stored thereon a program which, when executed by a processor, implements the live-action feedback type simulated driving method of any one of claims 1 to 5.
CN202010508614.6A 2020-06-06 2020-06-06 Live-action feedback type driving simulation method and device and server Active CN111626264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010508614.6A CN111626264B (en) 2020-06-06 2020-06-06 Live-action feedback type driving simulation method and device and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010508614.6A CN111626264B (en) 2020-06-06 2020-06-06 Live-action feedback type driving simulation method and device and server

Publications (2)

Publication Number Publication Date
CN111626264A CN111626264A (en) 2020-09-04
CN111626264B (en) 2020-12-08

Family

ID=72271473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010508614.6A Active CN111626264B (en) 2020-06-06 2020-06-06 Live-action feedback type driving simulation method and device and server

Country Status (1)

Country Link
CN (1) CN111626264B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112671932B (en) * 2021-01-25 2021-12-03 中林云信(上海)网络技术有限公司 Data processing method based on big data and cloud computing node

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103871291A (en) * 2014-04-06 2014-06-18 门立山 Driving simulation system for driving license getting
CN106327944A (en) * 2016-08-26 2017-01-11 北京大象科技有限公司 Vehicle simulation control method and system for urban rail transit
CN108376492A (en) * 2018-03-16 2018-08-07 成都博士信智能科技发展有限公司 Traffic equipment analogy method and system
CN108509832A (en) * 2017-02-28 2018-09-07 三星电子株式会社 Method and apparatus for generating virtual track
CN108595844A (en) * 2018-04-26 2018-09-28 成都博士信智能科技发展有限公司 Automatic Pilot control method and device

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040158476A1 (en) * 2003-02-06 2004-08-12 I-Sim, Llc Systems and methods for motor vehicle learning management
CN100589148C (en) * 2007-07-06 2010-02-10 浙江大学 Method for implementing automobile driving analog machine facing to disciplinarian
US11436935B2 (en) * 2009-09-29 2022-09-06 Advanced Training Systems, Inc System, method and apparatus for driver training system with stress management
DE102013201168A1 (en) * 2013-01-24 2014-07-24 Ford Global Technologies, Llc If necessary activatable remote control system for motor vehicles
CN105056500B (en) * 2015-07-22 2017-08-29 陈飞 A kind of situation simulation training/games system
US10114779B2 (en) * 2016-04-22 2018-10-30 Dell Products L.P. Isolating a redirected USB device to a set of applications
JP6298130B2 (en) * 2016-09-14 2018-03-20 株式会社バンダイナムコエンターテインメント Simulation system and program
JP6373920B2 (en) * 2016-09-14 2018-08-15 株式会社バンダイナムコエンターテインメント Simulation system and program
CN106205273A (en) * 2016-09-20 2016-12-07 山西省交通科学研究院 A kind of Vehicle driving simulator based on VR analogue technique and method
CN106991811B (en) * 2017-05-03 2019-07-05 同济大学 Expressway exit ring road upstream trackside road information optimum design method based on drive simulation experiment porch
CN108305524A (en) * 2018-01-26 2018-07-20 北京工业大学 Immersion driving behavior antidote and system based on drive simulation platform
CN109597317B (en) * 2018-12-26 2022-03-18 广州小鹏汽车科技有限公司 Self-learning-based vehicle automatic driving method and system and electronic equipment
CN110288874A (en) * 2019-04-28 2019-09-27 深圳市赛亿科技开发有限公司 A kind of virtual driving control method and system
CN110404261B (en) * 2019-08-20 2023-04-28 网易(杭州)网络有限公司 Method and device for constructing virtual road network in game
CN110837697A (en) * 2019-10-25 2020-02-25 华南理工大学 Intelligent traffic simulation system and method for intelligent vehicle

Also Published As

Publication number Publication date
CN111626264A (en) 2020-09-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201123

Address after: 518000 floor 1-5, building B, No. 9, Baofu Road, Baolai Industrial Zone, shangmugu community, Pinghu street, Longgang District, Shenzhen City, Guangdong Province

Applicant after: Jin Baoxing Electronics (Shenzhen) Co.,Ltd.

Address before: No.103, entrepreneurship base, Dongguan Institute of technology, No.1, Songshanhu University Road, Dalang Town, Dongguan City, Guangdong Province

Applicant before: Liang Zhibin

GR01 Patent grant