CN115841763A - Shooting control method and device based on demand recognition in driving mode - Google Patents

Shooting control method and device based on demand recognition in driving mode

Info

Publication number
CN115841763A
CN115841763A (application CN202211720089.XA)
Authority
CN
China
Prior art keywords
vehicle
terminal equipment
vehicles
smart phone
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211720089.XA
Other languages
Chinese (zh)
Other versions
CN115841763B (en)
Inventor
李明
白颂荣
周成梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xihua Technology Co Ltd
Original Assignee
Shenzhen Xihua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xihua Technology Co Ltd filed Critical Shenzhen Xihua Technology Co Ltd
Priority to CN202211720089.XA priority Critical patent/CN115841763B/en
Publication of CN115841763A publication Critical patent/CN115841763A/en
Application granted granted Critical
Publication of CN115841763B publication Critical patent/CN115841763B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 — Reducing energy consumption in communication networks
    • Y02D 30/70 — Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application relates to the technical field of general data processing in the Internet industry, and in particular to a shooting control method and device based on demand recognition in a driving mode. The method is applied to an automatic driving domain controller on a first vehicle that is further provided with a smartphone holder and a terminal device, and comprises the following steps: if it is determined that a third vehicle matching reference vehicle characteristic information exists around the first vehicle and that the terminal device is present on the smartphone holder, determine a target clamping height for the smartphone holder, send a first instruction to the holder so that it adjusts to the target clamping height, and send a second instruction to the terminal device so that it starts the video recording function of its camera and records the third vehicle, stopping the recording once the third vehicle is no longer within the viewing range of the camera. With this method, the driving experience and the safety of the vehicle can be improved.

Description

Shooting control method and device based on demand recognition in driving mode
Technical Field
The application relates to the technical field of general data processing in the Internet industry, and in particular to a shooting control method and device based on demand recognition in a driving mode.
Background
With the rapid development of science and technology, automotive electronics has made cars increasingly intelligent, improving users' experience and quality of life. In existing smart cars, however, when a driver wants to shoot a short video of an interesting vehicle while driving, the driver must take the mobile phone off its holder and manually start the camera function. This takes long enough that the best shooting opportunity may be missed, so the result is often unsatisfactory, and operating a mobile phone while driving also poses a safety hazard. How to photograph a vehicle of interest safely and with high quality during driving has therefore become a current research focus.
Disclosure of Invention
The embodiments of the application provide a shooting control method and device based on demand recognition in a driving mode.
In a first aspect, an embodiment of the present invention provides a shooting control method based on demand recognition in a driving mode. The method is applied to an automatic driving domain controller on a first vehicle; the first vehicle is further provided with a smartphone holder and a terminal device, and the domain controller is communicatively connected to both. The method comprises the following steps. If it is detected that the terminal device is in a charging state and its driving mode is enabled, reference vehicle characteristic information of a reference vehicle provided by a user is acquired. A current environment image of the first vehicle, collected by a vehicle-body domain controller on the first vehicle through a vehicle-mounted camera, is also acquired. If it is determined from the environment image that one or more second vehicles exist around the first vehicle, vehicle characteristic information of the one or more second vehicles is extracted, and it is determined, from the reference vehicle characteristic information and the vehicle characteristic information of the one or more second vehicles, whether a third vehicle matching the reference vehicle characteristic information exists among the one or more second vehicles. If the third vehicle is determined to exist, it is determined whether the terminal device is present on the smartphone holder.
If it is determined that the terminal device is present on the smartphone holder, a target clamping height for the holder is determined, and a first instruction containing the target clamping height is sent to the holder so that the holder adjusts to that height; when the clamping height of the holder equals the target clamping height, the original viewing range of the camera of the terminal device contains the third vehicle. A second instruction is then sent to the terminal device so that it starts the video recording function of its camera and records the third vehicle, stopping the recording once the third vehicle is no longer within the viewing range of the camera. If the automatic driving domain controller determines that the terminal device is not present on the smartphone holder, it generates and outputs a reminder prompting the user to place the terminal device on the holder, and then checks again whether the terminal device is present; if the terminal device is still absent, the current system state is kept.
In the embodiments of the application, the automatic driving domain controller can judge, from the reference vehicle characteristic information of the reference vehicle provided by the user and the current environment image collected by the vehicle-body domain controller through the vehicle-mounted camera, whether a third vehicle matching the reference vehicle characteristic information exists around the first vehicle. The domain controller can then direct the terminal device on the first vehicle to record the third vehicle. In this way, the terminal device is controlled, based on the user's demand recognized in the driving mode, to shoot the third vehicle, improving both the driving experience and safety.
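The decision flow described above can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation: the `Phone` and `Holder` classes, the dictionary fields, and the simple `matches` predicate are assumptions standing in for the terminal device, the smart phone support, and the feature-matching step.

```python
from dataclasses import dataclass

@dataclass
class Phone:
    charging: bool = True
    driving_mode: bool = True
    recording: bool = False

@dataclass
class Holder:
    phone_present: bool = True
    clamp_height_mm: float = 0.0

def matches(vehicle, ref):
    """Illustrative matching: brand equality plus a contour-similarity threshold."""
    return (vehicle["brand"] == ref["brand"]
            and vehicle["similarity"] >= ref["min_similarity"])

def shooting_control_step(second_vehicles, ref, phone, holder, target_height_mm):
    """One pass of the control flow; returns the matched third vehicle or None."""
    # Preconditions: terminal device charging and its driving mode enabled.
    if not (phone.charging and phone.driving_mode):
        return None
    # Find a third vehicle among the detected second vehicles.
    third = next((v for v in second_vehicles if matches(v, ref)), None)
    if third is None or not holder.phone_present:
        return None
    holder.clamp_height_mm = target_height_mm  # "first instruction"
    phone.recording = True                     # "second instruction"
    return third
```

In the full method the recording would stop once the third vehicle leaves the camera's viewing range; that loop is omitted here.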
With reference to the first aspect, in a possible implementation manner, the determining a target clamping height corresponding to the smartphone holder includes: and acquiring the current clamping height of the smart phone support and the posture of the terminal equipment. And determining whether the terminal equipment is inverted or not according to the posture of the terminal equipment. And if the terminal equipment is determined not to be inverted, determining the size of the terminal equipment and the position parameters of a camera of the terminal equipment on the terminal equipment according to a preset terminal equipment model. And determining a target clamping height corresponding to the smart phone support according to the current clamping height of the smart phone support, the size of the terminal equipment, the position parameter of a camera of the terminal equipment on the terminal equipment, the pre-stored geometric model characteristic parameter between the base of the smart phone support and the engine hood of the first vehicle and the first relative position of the third vehicle and the base of the smart phone support.
With reference to the first aspect, in a possible implementation manner, determining the target clamping height for the smartphone holder according to the current clamping height of the holder, the size of the terminal device, the position parameter of its camera on the terminal device, the pre-stored geometric-model characteristic parameter between the holder base and the bonnet of the first vehicle, and the first relative position between the third vehicle and the holder base includes: determining a second relative position between the camera of the terminal device and the holder base according to the current clamping height of the holder, the size of the terminal device, and the position parameter of the camera on the terminal device; determining, according to the second relative position, the first relative position, the stroke of the holder, and the shooting parameters of the camera, a first clamping-height range within which the original viewing range of the camera contains the third vehicle, together with the viewing range corresponding to that first range; determining, according to the pre-stored geometric-model characteristic parameter between the holder base and the bonnet and the viewing range corresponding to the first clamping-height range, a second clamping-height range within which the actual viewing effect is not occluded by the bonnet of the first vehicle; and selecting the minimum value in the second clamping-height range as the target clamping height.
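The selection of the target clamping height can be illustrated with a simple line-of-sight model. The geometry below is an assumption made for illustration (a straight sight line from the camera to the target's base, a fixed hood edge): in the disclosure, the pre-stored geometric-model characteristic parameters would replace these hard-coded dimensions.

```python
def hood_occludes(cam_h_mm, hood_dist_mm=800, hood_h_mm=950,
                  tgt_dist_mm=10_000, tgt_h_mm=500):
    """True if the sight line from the camera to the target vehicle's base
    passes below the hood edge, i.e. the engine hood blocks the view.
    All dimensions are measured from the holder base; values are illustrative."""
    line_h = cam_h_mm + (tgt_h_mm - cam_h_mm) * hood_dist_mm / tgt_dist_mm
    return line_h < hood_h_mm

def target_clamp_height(first_range_mm, occludes=hood_occludes, step_mm=10):
    """Scan the first clamping-height range (within which the camera's
    original viewing range contains the third vehicle), keep the heights
    whose view is not occluded (the second range), and return its minimum."""
    candidates = [h for h in range(first_range_mm[0], first_range_mm[1] + 1, step_mm)
                  if not occludes(h)]
    return min(candidates) if candidates else None
```

Choosing the minimum of the second range keeps the phone clamped as low as possible while still clearing the hood, which matches the selection rule stated above.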
With reference to the first aspect, in a possible implementation manner, the determining that an original view range of a camera of the terminal device includes a first clamping height range of the third vehicle and a view range corresponding to the first clamping height range according to the second relative position, the first relative position of the third vehicle and the smartphone holder base, the stroke of the smartphone holder, and shooting parameters of the camera of the terminal device includes: and determining a relation set between the clamping height of the smart phone support and the camera viewing range of the terminal equipment according to the second relative position, the stroke of the smart phone support and the shooting parameters of the camera of the terminal equipment. And determining that the original view range of the camera of the terminal equipment comprises a first clamping height range of the third vehicle and a view range corresponding to the first clamping height range according to the relation set and the first relative position.
With reference to the first aspect, in a possible implementation manner, the obtaining reference vehicle feature information of a reference vehicle provided by a user includes: and acquiring the number of the reference vehicles provided by the user. If the number of the reference vehicles provided by the user is determined to be 1, acquiring the vehicle brand and the vehicle profile of the reference vehicle, and determining the vehicle brand and the vehicle profile of the reference vehicle as the reference vehicle characteristic information of the reference vehicle.
With reference to the first aspect, in one possible implementation manner, the method further includes: and if the number of the reference vehicles provided by the user is determined to be larger than 1, acquiring the vehicle brand of each reference vehicle in the multiple reference vehicles. And determining a target vehicle brand according to the vehicle brand of each of the plurality of reference vehicles, wherein the proportion of the reference vehicles belonging to the target vehicle brand in the plurality of reference vehicles is equal to or greater than a preset proportion. And obtaining the vehicle contour of each reference vehicle in the plurality of reference vehicles, and determining the target vehicle contour corresponding to the plurality of reference vehicles according to the vehicle contour of each reference vehicle. And determining reference vehicle characteristic information of the reference vehicle according to the brand of the target vehicle and the outline of the target vehicle.
With reference to the first aspect, in one possible implementation manner, the determining a target vehicle brand according to the vehicle brands of the plurality of reference vehicles includes: dividing the plurality of reference vehicles into N1 first reference vehicle sets according to the reference vehicle brands, wherein the vehicle brands corresponding to the first reference vehicle sets in the N1 first reference vehicle sets are different, the reference vehicles included in the first reference vehicle sets belong to the vehicle brands corresponding to the first reference vehicle sets, and N1 is a positive integer equal to or greater than 1. And determining N2 target reference vehicle sets from the N1 first reference vehicle sets according to the number of reference vehicles contained in each first reference vehicle set, wherein the occupation ratio of reference vehicles contained in each target reference vehicle set in the N2 target reference vehicle sets is equal to or greater than a preset occupation ratio, and N2 is a positive integer equal to or greater than 1. And determining the vehicle brand corresponding to each target reference vehicle set in the N2 target reference vehicle sets as a target vehicle brand.
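The brand-grouping step can be sketched compactly: counting the reference vehicles per brand implicitly forms the N1 first reference vehicle sets, and the ratio filter yields the brands of the N2 target reference vehicle sets. The function name and the preset ratio value are illustrative assumptions.

```python
from collections import Counter

def target_brands(reference_brands, preset_ratio=0.3):
    """Group the reference vehicles by brand (the N1 first sets) and keep
    each brand whose share of all reference vehicles is at least the preset
    ratio (the N2 target sets); return the target vehicle brands."""
    counts = Counter(reference_brands)
    total = len(reference_brands)
    return {brand for brand, n in counts.items() if n / total >= preset_ratio}
```

For example, with brands ["A", "A", "B", "A"] and a preset ratio of 0.5, only brand "A" (share 3/4) is a target vehicle brand.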
With reference to the first aspect, in a possible implementation manner, the determining, according to the vehicle profiles of the reference vehicles, target profiles corresponding to the plurality of reference vehicles includes: and if the plurality of reference vehicles are determined to be two reference vehicles, judging whether the similarity of the vehicle profiles of the two reference vehicles is greater than a first preset similarity. And if the similarity of the vehicle profiles of the two reference vehicles is determined to be equal to or greater than the first preset similarity, determining the vehicle profile of any one of the two reference vehicles as a target vehicle profile. And if the similarity of the vehicle profiles of the two reference vehicles is smaller than the first preset similarity, determining the vehicle profiles of the two reference vehicles as the target vehicle profiles.
With reference to the first aspect, in one possible implementation manner, the method further includes: if the plurality of reference vehicles are determined to be three or more reference vehicles, whether the similarity of the vehicle profiles of any two of the three or more reference vehicles is smaller than the first preset similarity is judged. If the similarity of the vehicle profiles of any two of the three or more reference vehicles is smaller than the first preset similarity, determining the vehicle profiles of the three or more reference vehicles as target vehicle profiles. If it is determined that the similarity of the vehicle profiles of any two of the three or more reference vehicles is not less than the first preset similarity, determining one or more second reference vehicle sets from the three or more reference vehicles according to the vehicle profiles of the three or more reference vehicles, wherein each second reference vehicle set of the one or more second reference vehicle sets comprises at least two reference vehicles, and the similarity of the vehicle profiles of any two of the at least two reference vehicles included in each second reference vehicle set is equal to or greater than the first preset similarity. And determining a vehicle contour corresponding to each second reference vehicle set according to the vehicle contour of the reference vehicle contained in each second reference vehicle set in the one or more second reference vehicle sets, and determining the vehicle contour corresponding to each second reference vehicle set as a target vehicle contour.
With reference to the first aspect, in one possible implementation manner, the determining, according to the reference vehicle characteristic information and the vehicle characteristic information of the one or more second vehicles, whether a third vehicle matching the reference vehicle characteristic information exists in the one or more second vehicles includes: performing the following determination operation for any of the one or more second vehicles: and acquiring the vehicle brand and the vehicle profile of any second vehicle. And judging whether the vehicle brand and the vehicle outline of any second vehicle are matched with the reference vehicle characteristic information. And if the vehicle brand and the vehicle profile of any second vehicle are determined to be matched with the reference vehicle characteristic information, determining any second vehicle as a third vehicle. Performing the determination operation on the one or more second vehicles, respectively, to determine whether there is a third vehicle that matches the reference vehicle characteristic information among the one or more second vehicles.
With reference to the first aspect, in one possible implementation manner, the determining whether the vehicle brand and the vehicle profile of the second vehicle match the reference vehicle feature information includes: and if the vehicle brand of any second vehicle is determined to be the same as the target vehicle brand, and the similarity between the vehicle profile of any second vehicle and the target vehicle profile is determined to be equal to or greater than a second preset similarity, determining that the vehicle brand and the vehicle profile of any second vehicle are matched with the reference vehicle feature information.
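The matching judgment applied to each second vehicle can be expressed as a single predicate; the dictionary layout and function name are assumptions for illustration.

```python
def is_third_vehicle(vehicle, target_brands, target_profiles,
                     similarity, second_threshold):
    """A second vehicle matches the reference vehicle characteristic
    information when its brand is a target vehicle brand and its contour
    reaches the second preset similarity with some target vehicle contour."""
    return (vehicle["brand"] in target_brands
            and any(similarity(vehicle["profile"], t) >= second_threshold
                    for t in target_profiles))
```

Running this predicate over the one or more second vehicles yields every third vehicle that matches.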
With reference to the first aspect, in one possible implementation manner, the method further includes: and realizing synchronous video recording of the third vehicle and the camera of the terminal equipment through the vehicle-mounted camera on the first vehicle. And obtaining target video content for the third vehicle according to the video content of the third vehicle and the video content fusion of the camera of the terminal equipment by using the timestamp as a mark.
In a second aspect, embodiments of the present invention provide a shooting control apparatus based on demand recognition in a driving mode. The apparatus comprises an acquisition unit, a processing unit, and a determining unit. The acquisition unit is configured to acquire reference vehicle characteristic information of a reference vehicle provided by a user if it is detected that the terminal device is in a charging state and its driving mode is enabled, and to acquire the current environment image of the first vehicle collected by a vehicle-body domain controller on the first vehicle through a vehicle-mounted camera. The processing unit is configured to extract vehicle characteristic information of one or more second vehicles if it is determined from the environment image that one or more second vehicles exist around the first vehicle, and to determine, from the reference vehicle characteristic information and the vehicle characteristic information of the one or more second vehicles, whether a third vehicle matching the reference vehicle characteristic information exists among them. The determining unit is configured to determine whether the terminal device is present on the smartphone holder if the third vehicle exists.
The processing unit is configured to determine a target clamping height for the smartphone holder and send a first instruction to the holder so that it adjusts to the target clamping height, and to send a second instruction to the terminal device so that it starts the video recording function of its camera and records the third vehicle until the third vehicle is no longer within the viewing range of the camera, wherein the first instruction includes the target clamping height, and when the clamping height of the holder equals the target clamping height, the original viewing range of the camera of the terminal device contains the third vehicle. The processing unit is further configured to generate and output a reminder if the terminal device is not present on the smartphone holder, the reminder prompting the user to place the terminal device on the holder; to determine again whether the terminal device is present on the holder; and, if the terminal device is still absent, to keep the current system state.
With reference to the second aspect, in one possible implementation manner, the apparatus includes: and the acquisition unit is used for acquiring the current clamping height of the smart phone support and the posture of the terminal equipment. And the determining unit is used for determining whether the terminal equipment is inverted or not according to the posture of the terminal equipment. And the processing unit is used for determining the size of the terminal equipment and the position of a camera of the terminal equipment on the terminal equipment according to a preset terminal equipment model if the terminal equipment is determined not to be inverted. And the processing unit is used for determining a target clamping height corresponding to the smart phone support according to the current clamping height of the smart phone support, the size of the terminal equipment, the position parameter of a camera of the terminal equipment on the terminal equipment, the pre-stored geometric model characteristic parameter between the base of the smart phone support and the engine hood of the first vehicle and the first relative position between the third vehicle and the base of the smart phone support.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes: and selecting a unit. And the processing unit is used for determining a second relative position between the camera of the terminal equipment and the base of the intelligent mobile phone support according to the current clamping height of the intelligent mobile phone support, the size of the terminal equipment and the position parameter of the camera of the terminal equipment on the terminal equipment. And the processing unit is used for determining that the original view range of the camera of the terminal equipment comprises a first clamping height range of the third vehicle and a view range corresponding to the first clamping height range according to the second relative position, the first relative position of the third vehicle and the base of the smart phone support, the stroke of the smart phone support and the shooting parameters of the camera of the terminal equipment. And the processing unit is used for determining a second clamping height range, the actual framing effect of which cannot be shielded by the engine hood of the first vehicle, according to the pre-stored geometric model characteristic parameters between the smart phone support base and the engine hood of the first vehicle and the framing range corresponding to the first clamping height range. And the selecting unit is used for selecting the minimum value in the second clamping height range as the target clamping height.
With reference to the second aspect, in one possible implementation manner, the apparatus includes: and the processing unit is used for determining a relation set between the clamping height of the smart phone support and the camera viewing range of the terminal equipment according to the second relative position, the stroke of the smart phone support and the shooting parameters of the camera of the terminal equipment. And the processing unit is used for determining that the original view range of the camera of the terminal equipment comprises a first clamping height range of the third vehicle and a view range corresponding to the first clamping height range according to the relation set and the first relative position. With reference to the second aspect, in one possible implementation manner, the apparatus includes: and the acquisition unit is used for acquiring the number of the reference vehicles provided by the user. And the processing unit is used for acquiring the vehicle brand and the vehicle contour of the reference vehicle and determining the vehicle brand and the vehicle contour of the reference vehicle as the reference vehicle characteristic information of the reference vehicle if the number of the reference vehicles provided by the user is determined to be 1.
With reference to the second aspect, in one possible implementation manner, the apparatus further includes a determining unit. The processing unit is configured to acquire the vehicle brand of each of the plurality of reference vehicles if the number of reference vehicles provided by the user is determined to be greater than 1. The determining unit is configured to determine a target vehicle brand according to the vehicle brand of each of the plurality of reference vehicles, wherein the proportion of reference vehicles belonging to the target vehicle brand among the plurality of reference vehicles is equal to or greater than a preset proportion. The processing unit is configured to acquire the vehicle contour of each of the plurality of reference vehicles and to determine the target vehicle contours corresponding to the plurality of reference vehicles according to those vehicle contours. The determining unit is configured to determine the reference vehicle characteristic information of the reference vehicle according to the target vehicle brand and the target vehicle contour.
With reference to the second aspect, in one possible implementation manner, the apparatus includes: the processing unit is configured to divide the plurality of reference vehicles into N1 first reference vehicle sets according to the vehicle brand of each reference vehicle, where vehicle brands corresponding to the first reference vehicle sets in the N1 first reference vehicle sets are different, reference vehicles included in the first reference vehicle sets belong to the vehicle brands corresponding to the first reference vehicle sets, and N1 is a positive integer equal to or greater than 1. The determining unit is configured to determine N2 target reference vehicle sets from the N1 first reference vehicle sets according to the number of reference vehicles included in each first reference vehicle set, where an occupancy ratio of reference vehicles included in each target reference vehicle set in the N2 target reference vehicle sets is equal to or greater than a preset occupancy ratio, and N2 is a positive integer equal to or greater than 1. And the determining unit is used for determining the vehicle brand corresponding to each target reference vehicle set in the N2 target reference vehicle sets as the target vehicle brand.
With reference to the second aspect, in one possible implementation manner, the apparatus includes: and the processing unit is used for judging whether the similarity of the vehicle profiles of the two reference vehicles is greater than a first preset similarity or not if the multiple reference vehicles are determined to be two reference vehicles. And the determining unit is used for determining the vehicle contour of any one of the two reference vehicles as the target vehicle contour if the similarity of the vehicle contours of the two reference vehicles is determined to be equal to or greater than the first preset similarity. And the determining unit is used for determining the vehicle profiles of the two reference vehicles as the target vehicle profile if the similarity of the vehicle profiles of the two reference vehicles is determined to be smaller than the first preset similarity.
With reference to the second aspect, in one possible implementation manner, the apparatus includes: and the processing unit is used for judging whether the similarity of the vehicle profiles of any two of the three or more reference vehicles is smaller than the first preset similarity or not if the plurality of reference vehicles are determined to be three or more reference vehicles. The determining unit is used for determining the vehicle profiles of the three or more reference vehicles as target vehicle profiles if the similarity of the vehicle profiles of any two of the three or more reference vehicles is smaller than the first preset similarity. The determining unit is used for determining one or more second reference vehicle sets from the three or more reference vehicles according to the vehicle profiles of the three or more reference vehicles if the similarity of the vehicle profiles of any two of the three or more reference vehicles is determined not to be smaller than the first preset similarity, wherein the similarity of the vehicle profiles of any two of at least two reference vehicles included in the one or more second reference vehicle sets is equal to or larger than the first preset similarity. And the determining unit is used for determining the vehicle contour corresponding to each second reference vehicle set according to the vehicle contour of the reference vehicle contained in each second reference vehicle set in the one or more second reference vehicle sets, and determining the vehicle contour corresponding to each second reference vehicle set as the target vehicle contour.
With reference to the second aspect, in one possible implementation manner, the apparatus includes: a processing unit for performing the following determination operation on one or more second vehicles. And the acquisition unit is used for acquiring the vehicle brand and the vehicle profile of any second vehicle. And the processing unit is used for judging whether the vehicle brand and the vehicle outline of any second vehicle are matched with the reference vehicle characteristic information. A determination unit configured to determine any one of the second vehicles as a third vehicle if it is determined that the vehicle brand and the vehicle profile of the any one of the second vehicles match the reference vehicle feature information. A processing unit configured to perform the determination operation on the one or more second vehicles, respectively, to determine whether there is a third vehicle that matches the reference vehicle feature information in the one or more second vehicles.
With reference to the second aspect, in one possible implementation manner, the apparatus includes: and the determining unit is used for determining that the vehicle brand and the vehicle profile of any second vehicle are matched with the reference vehicle characteristic information if the vehicle brand of any second vehicle is determined to be the same as the target vehicle brand and the similarity between the vehicle profile of any second vehicle and the target vehicle profile is determined to be equal to or greater than a second preset similarity.
With reference to the second aspect, in one possible implementation manner, the apparatus includes: and the processing unit is used for realizing synchronous video recording of the third vehicle and the camera of the terminal equipment through the vehicle-mounted camera on the first vehicle. And the processing unit is used for fusing the video content of the third vehicle and the video content of the camera of the terminal equipment by using the timestamp as a mark to obtain target video content for the third vehicle.
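For illustration only (the embodiment does not prescribe a container format or fusion algorithm), the timestamp-marked fusion described above can be sketched as a merge of two sorted frame streams; the `(timestamp, frame)` tuples and the 40 ms pairing tolerance are assumptions:

```python
def fuse_by_timestamp(vehicle_frames, phone_frames, tolerance_ms=40):
    """Merge two recordings into one timeline keyed by timestamp.

    Each input is a list of (timestamp_ms, frame) pairs, already sorted.
    Frames whose timestamps fall within `tolerance_ms` of each other are
    paired; unmatched frames are kept with a None partner.
    """
    fused, i, j = [], 0, 0
    while i < len(vehicle_frames) and j < len(phone_frames):
        t1, f1 = vehicle_frames[i]
        t2, f2 = phone_frames[j]
        if abs(t1 - t2) <= tolerance_ms:
            fused.append((min(t1, t2), f1, f2))  # timestamps close enough: pair them
            i += 1
            j += 1
        elif t1 < t2:
            fused.append((t1, f1, None))  # vehicle-camera frame with no partner yet
            i += 1
        else:
            fused.append((t2, None, f2))  # phone-camera frame with no partner yet
            j += 1
    fused.extend((t, f, None) for t, f in vehicle_frames[i:])
    fused.extend((t, None, f) for t, f in phone_frames[j:])
    return fused
```

The actual target video content would then be rendered from the fused timeline; how paired frames are composited is left open by the embodiment.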
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, and when the computer program runs on a computer, the computer is enabled to execute the shooting method provided in any one of the possible implementation manners in the first aspect, so as to achieve the beneficial effects of the shooting method provided in the first aspect.
In a fourth aspect, embodiments of the present application provide an electronic device, which may include a processor and a memory, where the processor and the memory are connected to each other. The memory is used for storing a computer program, and the processor is configured to execute the computer program to implement the shooting method provided by the first aspect, so as to achieve the beneficial effects of the shooting method provided by the first aspect.
By implementing the embodiment of the invention, the automatic driving area controller can judge whether a third vehicle matched with the reference vehicle characteristic information exists around the first vehicle according to the reference vehicle characteristic information of the reference vehicle provided by the user and the current environment image of the first vehicle. Further, the autopilot domain controller may control a terminal device on the first vehicle to photograph the third vehicle. By the shooting method, the automatic driving area controller can identify and control the terminal equipment to shoot the third vehicle based on the requirements of the user in the driving mode, so that the driving experience and the safety of the user can be improved.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present invention, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a shooting control method based on demand identification in a driving mode according to an embodiment of the present application;
fig. 2 is a schematic diagram of determining a target height of a smartphone holder according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a photographing control device based on demand recognition in a driving mode according to an embodiment of the present application;
FIG. 4 is a schematic view of another structure of a photographing control device based on demand recognition in a driving mode according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the foregoing drawings are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps is not limited to only those steps recited, but may alternatively include other steps not recited, or may alternatively include other steps inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In current intelligent automobiles, when a driver wants to use a mobile phone to shoot a vehicle of interest and form a short video while driving, the driver needs to take the phone off the phone cradle and open the camera function before shooting. The whole process takes a long time, the best shooting opportunity may be missed, and a certain potential safety hazard exists, which affects user experience. Therefore, the technical problem to be solved by the present application is: how to photograph a vehicle in which the user is interested safely and with high quality during driving.
In the embodiment of the application, the terminal device may be an intelligent terminal, such as a smart phone, a tablet computer, or a wearable device, that the user can use for shooting. The autopilot domain controller bears the data processing and computing capability required for automatic driving, including but not limited to data processing for millimeter-wave radar, cameras, lidar, GPS and other devices, and is also responsible for the safety of underlying core data and networking data during automatic driving. The smartphone cradle can receive and send commands, adjust its height, and so on; its implementation form is not particularly limited here.
Please refer to fig. 1, where fig. 1 is a schematic flowchart of a shooting control method based on demand identification in a driving mode according to an embodiment of the present application, where the method may be applied to an automatic driving area controller on a first vehicle, the first vehicle is further provided with a smartphone cradle and a terminal device, and the automatic driving area controller is in communication connection with the smartphone cradle and the terminal device. It should be noted here that the first vehicle may be a vehicle in which the user is driving. As shown in fig. 1, the method may specifically include the following steps:
s101, if the terminal device is detected to be in a charging state and the driving mode of the terminal device is detected to be in an opening state, reference vehicle characteristic information of a reference vehicle provided by a user is acquired.
In some possible embodiments, when the automatic driving domain controller detects that the terminal device is in a charging state and that the driving mode of the terminal device is in an on state, the automatic driving domain controller may obtain reference vehicle feature information of the reference vehicle provided by the user. It should be noted here that, in the embodiment of the present application, the reference vehicle may be a vehicle that the user has indicated interest in and wishes to photograph, and the reference vehicle characteristic information may include the vehicle brand and the vehicle profile of the reference vehicle. The terminal device is required to be in the charging state to ensure that it has enough battery power to support the driving mode and the subsequent video recording function.
It should also be noted here that the driving mode may be turned on or off by the terminal device according to the user instruction received by the terminal device. Specifically, the user and the terminal device can interact through a human-computer interaction interface, so that the driving mode is started or closed.
In a specific implementation, the automatic driving domain controller may send inquiry information to the terminal device. Then, the automatic driving domain controller may determine whether the terminal device is in a charging state and whether the driving mode of the terminal device is in an on state based on the feedback information of the terminal device with respect to the inquiry information.
Further, when the automatic driving area controller detects that the terminal device is in a charging state and the driving mode of the terminal device is in an on state, the automatic driving area controller may first obtain a reference vehicle provided by the user.
Alternatively, the autopilot domain controller may receive a user input in a terminal device or a human-machine interface on the vehicle indicating the relevant content of the reference vehicle. Further, the autopilot domain controller may determine a reference vehicle that is of interest to the user based on the relevant content for indicating the reference vehicle.
Optionally, the automatic driving domain controller may obtain daily search content and browsing records of the vehicle from the user based on user authorization. Further, the autonomous driving domain controller may analyze and determine a reference vehicle of interest to the user.
Optionally, the autonomous driving domain controller may provide a plurality of pre-selected reference vehicles at the human-machine interface. The autopilot domain controller may then determine a reference vehicle of interest to the user based on the user's selection instruction. It should be noted that the pre-selected reference vehicles may be obtained, based on user authorization, by analyzing the user's daily search content and browsing records concerning vehicles. The pre-selected reference vehicles may alternatively cover vehicles of all brands.
Further, the automatic driving area controller may acquire reference vehicle characteristic information of the reference vehicle. It should be noted that the reference vehicle characteristic information may be information for characterizing and determining a certain reference vehicle or a certain type of reference vehicle. In the embodiment of the present application, the reference vehicle characteristic information may be a vehicle brand of the reference vehicle and a vehicle profile of the reference vehicle.
Optionally, the automatic driving area controller may directly obtain the vehicle brand of the reference vehicle according to information of a specific certain brand of vehicle selected by the user. Then, the automatic driving area controller can obtain an official vehicle picture corresponding to the reference vehicle according to the information of the reference vehicle. Further, the automatic driving area controller can identify and acquire the vehicle outline of the reference vehicle through technologies such as image identification and the like. Further, the automatic driving area controller may determine a vehicle brand and a vehicle profile corresponding to the reference vehicle as the reference vehicle characteristic information.
In an alternative embodiment, the autonomous driving domain controller may obtain the number of reference vehicles provided by the user. If the automatic driving area controller determines that the number of the reference vehicles provided by the user is 1, namely, one reference vehicle exists, the automatic driving area controller can acquire the vehicle brand and the vehicle profile of the reference vehicle. Further, the autopilot domain controller may determine a vehicle brand and a vehicle profile of the reference vehicle as the reference vehicle characteristic information of the reference vehicle.
In another alternative embodiment, the autonomous driving domain controller may obtain the number of reference vehicles provided by the user. If the automatic driving area controller determines that the number of the reference vehicles provided by the user is larger than 1, namely a plurality of reference vehicles exist, the automatic driving area controller can acquire the vehicle brands of the plurality of reference vehicles. The autonomous driving domain controller may then determine a target vehicle brand from the vehicle brands of each of the plurality of reference vehicles. It should be noted here that the proportion of the reference vehicles belonging to the brand of the target vehicle among the plurality of reference vehicles is equal to or greater than the preset proportion. The autonomous driving area controller may also obtain a vehicle profile of each of the plurality of reference vehicles, and the autonomous driving area controller may determine a target vehicle profile corresponding to the plurality of reference vehicles according to the vehicle profile of each of the reference vehicles. Further, the autopilot domain controller may determine the reference vehicle characteristic information based on the target vehicle brand and the target vehicle profile.
Optionally, the autonomous driving domain controller may divide the plurality of reference vehicles into N1 first reference vehicle sets according to the vehicle brand of each reference vehicle. Then, the automatic driving area controller may determine N2 target reference vehicle sets from the N1 first reference vehicle sets according to the number of the reference vehicles included in each first reference vehicle set. Further, the automatic driving area controller may determine, as the target vehicle brand, a vehicle brand corresponding to each target reference vehicle set of the N2 target reference vehicle sets. It should be noted here that, the vehicle brands corresponding to the respective first reference vehicle sets in the N1 first reference vehicle sets are different, the reference vehicles included in the respective first reference vehicle sets are the same as the vehicle brands corresponding to the respective first reference vehicle sets, and the proportion of the reference vehicles included in the respective target reference vehicle sets in the N2 target reference vehicle sets is equal to or greater than the preset proportion. Here, N1 and N2 are both positive integers equal to or greater than 1.
For example, it is assumed here that there are five reference vehicles, reference vehicle A, reference vehicle B, reference vehicle C, reference vehicle D, and reference vehicle E, whose vehicle brands correspond to vehicle brand 1, vehicle brand 2, vehicle brand 1, vehicle brand 1, and vehicle brand 2, respectively. The preset proportion is assumed to be 50%. The autonomous driving domain controller may divide the five reference vehicles into 2 first reference vehicle sets according to the vehicle brands of the five reference vehicles, where N1 is 2. Specifically, first reference vehicle set 1 includes reference vehicle A, reference vehicle C, and reference vehicle D, and first reference vehicle set 2 includes reference vehicle B and reference vehicle E. The proportion of the reference vehicles included in first reference vehicle set 1 is 60%, and the proportion of the reference vehicles included in first reference vehicle set 2 is 40%. Since the proportion 60% of first reference vehicle set 1 is greater than the preset proportion 50% and the proportion 40% of first reference vehicle set 2 is less than the preset proportion 50%, the autonomous driving domain controller may determine vehicle brand 1 corresponding to first reference vehicle set 1 as the target vehicle brand.
For another example, it is assumed here that there are four reference vehicles, reference vehicle a, reference vehicle B, reference vehicle C, and reference vehicle D, and the vehicle brands of these four reference vehicles correspond to vehicle brand 1, vehicle brand 2, and vehicle brand 1, respectively. The preset proportion is assumed to be 50%. The autonomous driving domain controller may divide the four reference vehicles into 2 first reference vehicle sets, where N1 is 2, according to the vehicle brands of the four reference vehicles. Specifically, the first set of reference vehicles 1 includes a reference vehicle a and a reference vehicle D, and the first set of reference vehicles 2 includes a reference vehicle B and a reference vehicle C. The percentage of the reference vehicles included in the first reference vehicle set 1 and the first reference vehicle set 2 is 50%. Since the percentage of the reference vehicles included in the first reference vehicle set 1 and the first reference vehicle set 2 is equal to the preset percentage 50%, the automatic driving area controller may determine, as the target vehicle brand, both the vehicle brand 1 corresponding to the first reference vehicle set 1 and the vehicle brand 2 corresponding to the first reference vehicle set 2.
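The brand-grouping step in the two examples above reduces to frequency counting. A minimal sketch, assuming brands are given as plain strings and using the examples' 50% preset proportion as a default parameter:

```python
from collections import Counter

def target_brands(brands, preset_proportion=0.5):
    """Return every brand whose share among the reference vehicles is
    equal to or greater than the preset proportion.

    Each brand implicitly defines one first reference vehicle set; the
    sets meeting the proportion are the target reference vehicle sets.
    """
    counts = Counter(brands)
    total = len(brands)
    return {b for b, n in counts.items() if n / total >= preset_proportion}
```

With the five-vehicle example, brand 1 holds 3/5 = 60% and is the only target brand; with the four-vehicle example, both brands hold exactly 50% and both are target brands.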
Optionally, if the automatic driving domain controller determines that the plurality of reference vehicles are two reference vehicles, it further judges whether the similarity of the vehicle profiles of the two reference vehicles is equal to or greater than a first preset similarity. If the automatic driving domain controller determines that the similarity of the vehicle profiles of the two reference vehicles is equal to or greater than the first preset similarity, the vehicle profile of either of the two reference vehicles can be determined as the target vehicle profile. If the automatic driving domain controller determines that the similarity of the vehicle profiles of the two reference vehicles is smaller than the first preset similarity, the vehicle profiles of both reference vehicles can be determined as target vehicle profiles.
For example, it is assumed here that there are a reference vehicle A and a reference vehicle B, whose vehicle profiles correspond to vehicle profile 1 and vehicle profile 2, respectively, and the similarity of vehicle profile 1 and vehicle profile 2 is 70%. Assume that the first preset similarity is 60%. Since the similarity 70% of the vehicle profiles of reference vehicle A and reference vehicle B is greater than the first preset similarity 60%, the autonomous driving domain controller may determine the vehicle profile 1 of reference vehicle A as the target vehicle profile.
For another example, it is assumed here that there are a reference vehicle A and a reference vehicle B, whose vehicle profiles correspond to vehicle profile 1 and vehicle profile 2, respectively, and the similarity of vehicle profile 1 and vehicle profile 2 is 40%. Assume that the first preset similarity is 60%. Since the similarity 40% of the vehicle profiles of reference vehicle A and reference vehicle B is less than the first preset similarity 60%, the automatic driving domain controller may determine both the vehicle profile 1 of reference vehicle A and the vehicle profile 2 of reference vehicle B as target vehicle profiles.
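The embodiments do not fix how the similarity of two vehicle profiles is computed. One hedged stand-in is the intersection-over-union of the filled contour silhouettes; the pixel-set mask representation and the 60% first preset similarity default below are assumptions:

```python
def contour_similarity(mask_a, mask_b):
    """Similarity of two vehicle contours, approximated here as the
    intersection-over-union (IoU) of their filled silhouettes.
    Each mask is a set of (x, y) pixel coordinates."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 1.0

def target_contours_for_two(contour_1, contour_2, first_preset=0.6):
    """Two-reference-vehicle rule: keep one contour if the two are
    similar enough, otherwise keep both as target vehicle profiles."""
    if contour_similarity(contour_1, contour_2) >= first_preset:
        return [contour_1]  # either of the two profiles may be used
    return [contour_1, contour_2]
```

In practice a shape descriptor (e.g. Hu-moment comparison) would likely replace raw mask IoU; the threshold logic stays the same.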
Optionally, if the automatic driving domain controller determines that the plurality of reference vehicles are three or more reference vehicles, it further judges whether the similarity of the vehicle profiles of every two reference vehicles among the three or more reference vehicles is smaller than a first preset similarity. If the automatic driving domain controller determines that the similarity of the vehicle profiles of any two of the three or more reference vehicles is smaller than the first preset similarity, the vehicle profiles of the three or more reference vehicles can be determined as target vehicle profiles. If the automatic driving domain controller determines that the similarity of the vehicle profiles of at least one pair of the three or more reference vehicles is not less than the first preset similarity, one or more second reference vehicle sets can be determined from the three or more reference vehicles according to the vehicle profiles of the three or more reference vehicles. Further, the automatic driving domain controller may determine, according to the vehicle contours of the reference vehicles included in each of the one or more second reference vehicle sets, a vehicle contour corresponding to each second reference vehicle set, and determine the vehicle contour corresponding to each second reference vehicle set as a target vehicle contour. Specifically, for any one of the one or more second reference vehicle sets, the autonomous driving domain controller may determine the vehicle contour of any reference vehicle in that set as the target vehicle contour. It should be noted here that each of the one or more second reference vehicle sets includes at least two reference vehicles, and the similarity of the vehicle profiles of any two reference vehicles among the at least two reference vehicles included in each second reference vehicle set is equal to or greater than the first preset similarity.
For example, it is assumed here that there are a reference vehicle a, a reference vehicle B, and a reference vehicle C, the vehicle profiles of the three reference vehicles correspond to a vehicle profile 1, a vehicle profile 2, and a vehicle profile 3, respectively, the similarity of the vehicle profile 1 and the vehicle profile 2 is 40%, the similarity of the vehicle profile 1 and the vehicle profile 3 is 50%, and the similarity of the vehicle profile 2 and the vehicle profile 3 is 30%. Assume that the first predetermined similarity is 60%. The automatic driving area controller may determine that the similarity of the vehicle profiles of any two of the three reference vehicles is less than the first preset similarity 60%, and then the automatic driving area controller may determine the vehicle profile 1 of the reference vehicle a, the vehicle profile 2 of the reference vehicle B, and the vehicle profile 3 of the reference vehicle C as the target vehicle profile.
For another example, it is assumed here that there are a reference vehicle a, a reference vehicle B, and a reference vehicle C, the vehicle profiles of the three reference vehicles correspond to a vehicle profile 1, a vehicle profile 2, and a vehicle profile 3, respectively, the similarity of the vehicle profile 1 and the vehicle profile 2 is 80%, the similarity of the vehicle profile 1 and the vehicle profile 3 is 50%, and the similarity of the vehicle profile 2 and the vehicle profile 3 is 40%. Assume that the first predetermined similarity is 60%. The autopilot domain controller may determine that 80% of the similarity between the vehicle contour 1 and the vehicle contour 2 is greater than the first preset similarity 60%, and the autopilot domain controller may determine a second set of reference vehicles from the three reference vehicles based on the vehicle contours of the three reference vehicles, the second set of reference vehicles including the reference vehicle a and the reference vehicle B. Further, the autopilot domain controller may determine the vehicle profile 1 of the reference vehicle a of the second set of reference vehicles as the target vehicle profile.
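The three-or-more case above can be sketched as a greedy grouping into second reference vehicle sets. The greedy strategy and the one-representative-per-set choice follow the worked examples; the actual set-construction algorithm is not specified by the embodiment, and dropping singleton groups mirrors the second example, where only vehicle profile 1 becomes a target:

```python
def target_contours(contours, similarity, first_preset=0.6):
    """Determine target vehicle contours for three or more reference vehicles.

    `contours` is a list of contour objects; `similarity(a, b)` returns a
    score in [0, 1]. If no pair reaches the first preset similarity, every
    contour is a target; otherwise greedily build second reference vehicle
    sets whose members are pairwise similar and keep one representative
    contour per set of at least two vehicles.
    """
    n = len(contours)
    if all(similarity(contours[i], contours[j]) < first_preset
           for i in range(n) for j in range(i + 1, n)):
        return list(contours)  # all mutually dissimilar: keep everything
    clusters = []
    for c in contours:
        for cluster in clusters:
            # join a cluster only if similar to every current member
            if all(similarity(c, m) >= first_preset for m in cluster):
                cluster.append(c)
                break
        else:
            clusters.append([c])
    return [cluster[0] for cluster in clusters if len(cluster) >= 2]
```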
S102, acquiring a current environment image of the first vehicle, which is acquired by a vehicle body area controller on the first vehicle through a vehicle-mounted camera.
In some possible embodiments, the automatic driving area controller may obtain a current environment image of the first vehicle, which is acquired by a body area controller on the first vehicle through an on-board camera. Here, the environment image may be an image of an object including a vehicle, a person, a tree, a building, and the like around the first vehicle, which is obtained by photographing the environment around the vehicle.
In specific implementation, the automatic driving area controller can start the vehicle-mounted camera to shoot the surrounding environment of the first vehicle through the vehicle body area controller on the first vehicle so as to obtain the current environment image of the first vehicle.
S103, if one or more second vehicles are determined to exist around the first vehicle according to the environment image, vehicle feature information of the one or more second vehicles is extracted, and whether a third vehicle matched with the reference vehicle feature information exists in the one or more second vehicles is determined according to the reference vehicle feature information and the vehicle feature information of the one or more second vehicles.
In some possible embodiments, if it is determined that one or more second vehicles are present around the first vehicle according to the environment image, the automatic driving domain controller may extract vehicle characteristic information of the one or more second vehicles, and determine whether a third vehicle matching the reference vehicle characteristic information is present in the one or more second vehicles according to the vehicle characteristic information of the reference vehicle and the vehicle characteristic information of the one or more second vehicles.
In a specific implementation, the automatic driving domain controller respectively executes a judgment operation on one or more second vehicles to determine whether a third vehicle matched with the reference vehicle characteristic information exists in the one or more second vehicles. The following takes any one of the one or more second vehicles as an example for explanation. The autopilot domain controller may obtain the vehicle brand and vehicle profile of any second vehicle. The autonomous driving domain controller may then determine whether the vehicle brand and the vehicle profile of the any of the second vehicles match the reference vehicle characteristic information.
Further, the automatic driving area controller may determine any second vehicle as a third vehicle if it is determined that the vehicle brand and the vehicle profile of the any second vehicle match the reference vehicle characteristic information. Specifically, if the automatic driving area controller determines that the vehicle brand of any one of the second vehicles is the same as the target vehicle brand, and determines that the similarity between the vehicle profile of any one of the second vehicles and the target vehicle profile is equal to or greater than a second preset similarity, it may be determined that the vehicle brand and the vehicle profile of any one of the second vehicles match the reference vehicle characteristic information.
For example, it is assumed here that the target vehicle brand included in the reference vehicle characteristic information of the reference vehicle is vehicle brand 1, and the target vehicle profile is vehicle profile 1. Assume that the second preset similarity is 60%. Assume that the environment image includes a second vehicle whose brand is vehicle brand 1 and whose vehicle contour is vehicle contour 2, and that the similarity of vehicle contour 1 and vehicle contour 2 is 80%. The automatic driving domain controller may acquire that the vehicle brand of the second vehicle included in the environment image is vehicle brand 1 and that its vehicle contour is vehicle contour 2. Since the vehicle brand of the second vehicle is identical to the target vehicle brand (vehicle brand 1), and the similarity 80% between the vehicle contour 2 of the second vehicle and the target vehicle contour (vehicle contour 1) is greater than the second preset similarity 60%, the autonomous driving domain controller may determine that the vehicle brand and the vehicle contour of the second vehicle match the reference vehicle characteristic information.
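The matching rule of this example (brand equal to a target vehicle brand, and contour similarity to some target vehicle contour equal to or greater than the second preset similarity) can be sketched as follows; the similarity callback and the data representations are assumptions:

```python
def matches_reference(brand, contour, target_brands, target_contours,
                      similarity, second_preset=0.6):
    """S103 matching rule: a second vehicle matches the reference vehicle
    feature information when its brand equals one of the target vehicle
    brands and its contour reaches the second preset similarity against
    at least one target vehicle contour."""
    if brand not in target_brands:
        return False
    return any(similarity(contour, t) >= second_preset
               for t in target_contours)
```

A second vehicle for which this returns True would be determined as a third vehicle and handed to the shooting steps S104 and S105.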
And S104, if the third vehicle is determined to exist, determining whether the terminal equipment exists on the smart phone support or not.
In some possible implementations, if the autonomous driving domain controller determines that a third vehicle matching the reference vehicle characteristic information is present, it may be determined whether a terminal device is present on the smartphone cradle.
In a specific implementation, the autopilot domain controller may send query information to the smartphone cradle. Further, the autopilot domain controller may determine whether a terminal device is present on the smartphone cradle based on feedback information of the smartphone cradle for the inquiry information.
S105, if it is determined that the terminal device exists on the smartphone cradle, the target clamping height corresponding to the smartphone cradle is determined, a first instruction is sent to the smartphone cradle so that the smartphone cradle is adjusted to the target clamping height, and a second instruction is sent to the terminal device so that the terminal device starts the video recording function of the camera and records a video of the third vehicle, the recording being stopped once the third vehicle is no longer within the viewfinder range of the camera of the terminal device.
In some feasible embodiments, if the autopilot domain controller determines that the terminal device is present on the smartphone cradle, it may determine the target clamping height corresponding to the smartphone cradle, send a first instruction to the smartphone cradle so that the cradle is adjusted to the target clamping height, and send a second instruction to the terminal device so that the terminal device starts the video recording function of the camera and records the third vehicle, stopping the recording once the third vehicle is no longer within the viewfinder range of the camera of the terminal device. It should be noted here that the first instruction includes the target clamping height. When the clamping height of the smartphone cradle is the target clamping height, the original viewfinder range of the terminal device includes the third vehicle.
In an optional implementation, if the autonomous driving domain controller determines that a terminal device exists on the smartphone holder, it may acquire the current clamping height of the smartphone holder and the posture of the terminal device. Further, the autonomous driving domain controller may determine whether the terminal device is inverted based on its posture. Then, if the terminal device is not inverted, the autonomous driving domain controller may determine the size of the terminal device and the position parameter of the camera on the terminal device according to a preset model of the terminal device.
Furthermore, the autonomous driving domain controller can determine the target clamping height at which the original viewing range of the camera of the terminal device includes the third vehicle, according to the current clamping height of the smartphone holder, the size of the terminal device, the position parameter of the camera on the terminal device, the pre-stored geometric model characteristic parameters between the smartphone holder base and the hood of the first vehicle, and the first relative position between the third vehicle and the smartphone holder base. It should be noted here that the geometric model characteristic parameters between the smartphone holder base and the hood of the first vehicle may include the relative position, relative height, relative distance, and the like between the smartphone holder base and the hood of the first vehicle.
In a specific implementation, the autonomous driving domain controller can determine a second relative position between the camera of the terminal device and the smartphone holder base according to the current clamping height of the smartphone holder, the size of the terminal device, and the position parameter of the camera on the terminal device. Then, the autonomous driving domain controller can determine a first clamping height range, within which the original viewing range of the terminal device contains the third vehicle, and the viewing range corresponding to the first clamping height range, according to the second relative position, the first relative position between the third vehicle and the smartphone holder base, the stroke of the smartphone holder, and the shooting parameters of the camera of the terminal device.
Further, the autonomous driving domain controller can determine a second clamping height range, within which the actual viewing effect is not blocked by the hood of the first vehicle, according to the pre-stored geometric model characteristic parameters between the smartphone holder base and the hood of the first vehicle and the viewing range corresponding to the first clamping height range. It should be noted that the stroke of the smartphone holder refers to the range over which its clamping height can be adjusted, and that the original viewing range of the terminal device refers to the range the camera can capture when the position of the terminal device is kept unchanged after the video/shooting function of its camera is started. Specifically, the autonomous driving domain controller may determine a set of relationships between the clamping height of the smartphone holder and the viewing range of the camera of the terminal device according to the second relative position, the stroke of the smartphone holder, and the shooting parameters of the camera. Further, the autonomous driving domain controller may determine, according to this relationship set and the first relative position, the first clamping height range within which the original viewing range of the camera contains the third vehicle, and the corresponding viewing range. It should be noted here that the shooting parameters of the camera of the terminal device may include the pixel value, aperture size, and the like of the camera.
The position parameter of the camera on the terminal device is a parameter used to represent the specific position of the camera on the terminal device, and may include orientation information and/or size information of the camera. For example, the position parameter may include only orientation information, such as the camera being at the upper left corner of the terminal device. For another example, the position parameter may include both orientation information and size information, the orientation information being that the camera is at the upper left corner of the terminal device, and the size information being that the distance from the camera to the frame of the phone is 5 mm.
Further, the autonomous driving domain controller may select the minimum value of the second clamping height range as the target clamping height. Then, the autonomous driving domain controller can send a first instruction to the smartphone holder so that the smartphone holder is adjusted to the target clamping height, and send a second instruction to the terminal device so that the terminal device starts the video recording function of its camera and records the third vehicle, the recording being stopped when the third vehicle is no longer within the viewing range of the camera.
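The pipeline above, deriving the first and second clamping height ranges and then taking the minimum, can be illustrated with a simplified two-dimensional geometry. In the sketch below the holder stroke is sampled, a clamping height is kept when (a) the third vehicle lies within the camera's vertical field of view and (b) the sightline to the vehicle clears the hood top, and the minimum surviving height is returned. Every dimension, the sampling resolution, and both visibility conditions are illustrative assumptions rather than the patent's geometric model.

```python
import math

def target_clamping_height(
    travel=(0.0, 0.12),   # holder stroke: adjustable clamping height range (m)
    cam_offset=0.10,      # camera height above the clamp point on the phone (m)
    base_height=1.30,     # holder base height above the road (m)
    hood_dist=1.8,        # horizontal distance from base to hood top edge (m)
    hood_height=1.22,     # hood top height above the road (m)
    veh_dist=12.0,        # horizontal distance from base to the third vehicle (m)
    half_fov_deg=35.0,    # vertical half field of view of the camera (degrees)
):
    lo, hi = travel
    candidates = []
    for i in range(101):  # sample the stroke in 1% steps
        clamp = lo + (hi - lo) * i / 100
        cam_h = base_height + clamp + cam_offset
        # (a) first range: the vehicle base falls inside the vertical FOV
        in_view = math.degrees(math.atan2(cam_h, veh_dist)) <= half_fov_deg
        # (b) second range: the sightline height at the hood clears the hood top
        clear = cam_h * (1 - hood_dist / veh_dist) >= hood_height
        if in_view and clear:
            candidates.append(clamp)
    # the minimum of the second clamping height range is the target
    return min(candidates) if candidates else None
```

With these example dimensions the function returns a target clamping height of about 0.036 m; with a sufficiently tall hood it returns None, signalling that no clamping height within the stroke avoids occlusion, in which case the reminding path of S106 would apply.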
Alternatively, if the autonomous driving domain controller determines that the third vehicle exists, it may obtain a first height of the hood of the first vehicle relative to the chassis of the first vehicle and a second height of the camera of the terminal device on the first vehicle relative to the chassis. It should be noted here that the first height preferably refers to the maximum height of the hood relative to the chassis. The autonomous driving domain controller can then determine a target height, relative to the chassis of the first vehicle, of the smartphone holder on which the terminal device is placed, based on the first height and the second height. Further, the autonomous driving domain controller may adjust the smartphone holder to the target height and start the video recording function of the camera of the terminal device to shoot the third vehicle. It should be noted here that the target height of the smartphone holder relative to the chassis may be greater than or smaller than the first height.
For example, please refer to fig. 2, which is a schematic diagram of a method for determining the target height of a smartphone holder according to an embodiment of the present application. As shown in fig. 2, assume that the first height of the hood of the first vehicle relative to the chassis is h1, and the second height of the camera of the terminal device on the first vehicle relative to the chassis is h2. The autonomous driving domain controller can determine, from the first height h1 and the second height h2, that the target height of the smartphone holder relative to the chassis is h0. Then, the autonomous driving domain controller may adjust the smartphone holder to the target height h0 and start the video recording function of the camera of the terminal device to shoot the third vehicle.
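The patent leaves the exact relation between h0, h1 and h2 open. One plausible rule, shown purely as an assumption, is to raise the holder just far enough that the camera clears the hood top by a small margin:

```python
def target_holder_height(h1, h2, h_cur, margin=0.05):
    """Sketch of one possible h0 rule (an assumption, not the patent's).

    h1:    hood top height relative to the chassis (m)
    h2:    current camera height relative to the chassis (m)
    h_cur: current holder height relative to the chassis (m)
    """
    cam_offset = h2 - h_cur  # the camera sits this far above the holder
    # choose h0 so the camera ends up `margin` metres above the hood top
    return h1 + margin - cam_offset

# hood top 1.20 m, camera currently at 1.15 m with the holder at 1.00 m:
h0 = target_holder_height(1.20, 1.15, 1.00)
print(round(h0, 3))  # 1.1
```

Depending on cam_offset and margin, h0 can come out above or below h1, which is consistent with the note above that the target height may be greater than or smaller than the first height.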
S106, if it is determined that no terminal device exists on the smartphone holder, generating and outputting reminding information; determining again whether a terminal device exists on the smartphone holder; and if it is determined again that no terminal device exists on the smartphone holder, keeping the current system state.
In some possible implementations, the autonomous driving domain controller may generate and output reminding information if it determines that no terminal device is present on the smartphone holder. After outputting the reminding information, the autonomous driving domain controller can determine again whether a terminal device is present on the smartphone holder, and if none is present, keep the current system state. It should be noted here that the reminding information is used to remind the user to place the terminal device on the smartphone holder.
Optionally, if the autonomous driving domain controller determines that no terminal device exists on the smartphone holder, it may control a human-computer interaction interface on the vehicle to generate and output the reminding information. For example, the human-computer interaction interface may display the message "please place the terminal device on the smartphone holder" to remind the user.
In this implementation, the autonomous driving domain controller may determine whether a third vehicle matching the reference vehicle characteristic information exists around the first vehicle according to the reference vehicle characteristic information of the reference vehicle provided by the user and the current environment image of the first vehicle. Further, the autonomous driving domain controller may control the terminal device on the first vehicle to shoot the third vehicle. Because the autonomous driving domain controller can recognize the user's demand in the driving mode and control the terminal device to shoot the third vehicle accordingly, the driving experience and safety of the user can be improved. In some possible embodiments, the autonomous driving domain controller may record the third vehicle synchronously through the vehicle-mounted camera on the first vehicle and the camera of the terminal device. Further, the autonomous driving domain controller may obtain target video content for the third vehicle by fusing, with the timestamp as a marker, the video content recorded by the vehicle-mounted camera with that recorded by the camera of the terminal device.
Optionally, the autonomous driving domain controller may obtain the current environment video of the first vehicle through the vehicle body domain controller. Further, the autonomous driving domain controller can splice the captured video obtained by shooting the third vehicle through the terminal device with the current environment video of the first vehicle according to the timestamps of the captured video, so as to obtain one video combining the footage recorded under the same timestamps.
For example, the autonomous driving domain controller may obtain the current environment video of the first vehicle through the vehicle body domain controller, and assume that the environment video covers 10:15:00 to 10:16:00. Assume further that the captured video of the third vehicle obtained through the terminal device covers 10:15:15 to 10:15:30, while the third vehicle already appears in the environment video from 10:15:05. By splicing the environment video segment from 10:15:05 to 10:15:15 in front of the captured video according to the timestamps, the autonomous driving domain controller can obtain a combined video covering 10:15:05 to 10:15:30.
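The timestamp-based splicing in this example can be sketched as a merge of two frame sequences on a shared timeline. The frame representation (one placeholder entry every five seconds) and the rule that the terminal's frames take priority over environment frames are illustrative assumptions:

```python
def splice_by_timestamp(env_frames, shot_frames):
    """env_frames / shot_frames: lists of (timestamp_seconds, frame) pairs.
    Frames shot by the terminal device take priority; environment frames
    fill the remaining timestamps, giving one chronological sequence."""
    shot_times = {t for t, _ in shot_frames}
    merged = list(shot_frames) + [(t, f) for t, f in env_frames if t not in shot_times]
    return sorted(merged, key=lambda p: p[0])

# environment video 10:15:00-10:16:00, terminal video 10:15:15-10:15:30,
# represented as one placeholder frame every five seconds (seconds past 10:15)
env = [(t, f"env@{t}") for t in range(0, 61, 5)]
shot = [(t, f"shot@{t}") for t in range(15, 31, 5)]
merged = splice_by_timestamp(env, shot)
# keep the 10:15:05 to 10:15:30 window of the combined timeline
clip = [(t, f) for t, f in merged if 5 <= t <= 30]
print([t for t, _ in clip])  # [5, 10, 15, 20, 25, 30]
```

The clipped sequence carries environment frames for 10:15:05 and 10:15:10 and the terminal's frames from 10:15:15 onward, mirroring the spliced video described above.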
Optionally, the autonomous driving domain controller may obtain the current environment image of the first vehicle through the vehicle body domain controller. Then, the autonomous driving domain controller may acquire, based on the timestamp of the captured image obtained by shooting the third vehicle, the image corresponding to that timestamp in the environment image. Further, the autonomous driving domain controller can fuse the captured image with the environment image through techniques such as image enhancement to obtain a captured image with better quality.
In this implementation, the autonomous driving domain controller can process the captured image of the third vehicle together with the current environment image of the first vehicle obtained through the vehicle body domain controller, so as to obtain a more complete shooting record of the third vehicle with better image quality.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a photographing control device based on demand recognition in a driving mode according to an embodiment of the present application. As shown in fig. 3, the photographing control device based on demand recognition in the driving mode may include: an obtaining unit 31, a processing unit 32 and a determining unit 33.
In a specific implementation, the obtaining unit 31 is configured to obtain the reference vehicle characteristic information of the reference vehicle provided by the user if it is detected that the terminal device is in the charging state and the driving mode of the terminal device is on, and to obtain the current environment image of the first vehicle collected by the vehicle body domain controller on the first vehicle through the vehicle-mounted camera. The processing unit 32 is configured to extract the vehicle characteristic information of one or more second vehicles if it is determined from the environment image that one or more second vehicles exist around the first vehicle, and to determine, according to the reference vehicle characteristic information and the vehicle characteristic information of the one or more second vehicles, whether a third vehicle matching the reference vehicle characteristic information exists among them. The determining unit 33 is configured to determine whether a terminal device exists on the smartphone holder if it is determined that the third vehicle exists. The processing unit 32 is configured to determine the target clamping height corresponding to the smartphone holder, send a first instruction to the smartphone holder so that the smartphone holder is adjusted to the target clamping height, and send a second instruction to the terminal device so that the terminal device starts the video recording function of its camera and records the third vehicle, the recording being stopped when the third vehicle is no longer within the viewing range of the camera, where the first instruction includes the target clamping height, and when the clamping height of the smartphone holder is the target clamping height, the original viewing range of the camera of the terminal device includes the third vehicle.
The processing unit 32 is further configured to generate and output reminding information if it is determined that no terminal device exists on the smartphone holder, to determine again whether a terminal device exists on the smartphone holder, and, if it is determined again that none exists, to keep the current system state, where the reminding information is used to remind the user to place the terminal device on the smartphone holder.
In an alternative embodiment, the obtaining unit 31 is configured to obtain a current clamping height of the smartphone holder and a posture of the terminal device. A determining unit 33, configured to determine whether the terminal device is inverted according to the posture of the terminal device. And the processing unit 32 is configured to determine, if it is determined that the terminal device is not inverted, a size of the terminal device and a position parameter of a camera of the terminal device on the terminal device according to a preset model of the terminal device. And the processing unit 32 is configured to determine a target clamping height corresponding to the smartphone support according to the current clamping height of the smartphone support, the size of the terminal device, the position parameter of the camera at the terminal device, a pre-stored geometric model characteristic parameter between the smartphone support base and the hood of the first vehicle, and a first relative position of the third vehicle and the smartphone support base.
In an alternative embodiment, please refer to fig. 4, where fig. 4 is a schematic structural diagram of a photographing control device based on demand recognition in a driving mode according to an embodiment of the present application. As shown in fig. 4, the photographing control device based on demand recognition in the driving mode may further include a selecting unit 34. And the processing unit 32 is configured to determine a second relative position between the camera of the terminal device and the base of the smartphone holder according to the current clamping height of the smartphone holder, the size of the terminal device, and a position parameter of the camera of the terminal device on the terminal device. And the processing unit 32 is configured to determine that the original viewing range of the camera of the terminal device includes the first clamping height range of the third vehicle and the viewing range corresponding to the first clamping height range according to the second relative position, the first relative position of the third vehicle and the smartphone holder base, the stroke of the smartphone holder, and the shooting parameters of the camera of the terminal device. And the processing unit 32 is configured to determine a second clamping height range, in which an actual viewing effect cannot be blocked by the hood of the first vehicle, according to a pre-stored geometric model characteristic parameter between the smartphone holder base and the hood of the first vehicle and a viewing range corresponding to the first clamping height range. And the selecting unit 34 is used for selecting the minimum value in the second clamping height range as the target clamping height.
In an alternative embodiment, the processing unit 32 is configured to determine a set of relationships between the smartphone holder clamping height and the viewing range of the camera of the terminal device according to the second relative position, the stroke of the smartphone holder, and the shooting parameters of the camera of the terminal device. And the processing unit 32 is used for determining that the original view range of the camera of the terminal device contains the first clamping height range of the third vehicle and the view range corresponding to the first clamping height range according to the relationship set and the first relative position.
In an alternative embodiment, the obtaining unit 31 is configured to obtain the number of reference vehicles provided by the user. And the processing unit 32 is used for acquiring the vehicle brand and the vehicle contour of the reference vehicle and determining the vehicle brand and the vehicle contour of the reference vehicle as the reference vehicle characteristic information of the reference vehicle if the number of the reference vehicles provided by the user is determined to be 1.
In an alternative embodiment, the processing unit 32 is configured to obtain the vehicle brand of each of the plurality of reference vehicles if it is determined that the number of reference vehicles provided by the user is greater than 1. The determining unit 33 is configured to determine a target vehicle brand according to the vehicle brand of each reference vehicle, where the proportion, among the plurality of reference vehicles, of the reference vehicles belonging to the target vehicle brand is equal to or greater than a preset ratio. The processing unit 32 is configured to obtain the vehicle contour of each of the plurality of reference vehicles and determine the target vehicle contour corresponding to the plurality of reference vehicles according to those contours. The determining unit 33 is configured to determine the reference vehicle characteristic information according to the target vehicle brand and the target vehicle contour.
In an optional embodiment, the processing unit 32 is configured to divide the plurality of reference vehicles into N1 first reference vehicle sets according to their vehicle brands, where the vehicle brands corresponding to the N1 first reference vehicle sets are different from one another, the reference vehicles included in each first reference vehicle set all belong to the vehicle brand corresponding to that set, and N1 is a positive integer equal to or greater than 1. The determining unit 33 is configured to determine N2 target reference vehicle sets from the N1 first reference vehicle sets according to the number of reference vehicles included in each first reference vehicle set, where the proportion, among all reference vehicles, of the reference vehicles included in each of the N2 target reference vehicle sets is equal to or greater than the preset ratio, and N2 is a positive integer equal to or greater than 1. The determining unit 33 is configured to determine the vehicle brand corresponding to each of the N2 target reference vehicle sets as a target vehicle brand.
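The N1 to N2 selection described above, grouping the reference vehicles by brand and keeping the brands whose share reaches the preset ratio, can be sketched as follows; the 0.5 default for the preset ratio is an illustrative assumption:

```python
from collections import Counter

def target_brands(brands, preset_ratio=0.5):
    """Partition reference vehicles by brand (the N1 first reference
    vehicle sets) and keep the brands whose share of all reference
    vehicles reaches preset_ratio (the N2 target sets)."""
    counts = Counter(brands)  # one entry per first reference vehicle set
    total = len(brands)
    return {b for b, n in counts.items() if n / total >= preset_ratio}

print(target_brands(["BrandA", "BrandA", "BrandB", "BrandA"]))  # {'BrandA'}
```

With the sample input, BrandA accounts for three of four reference vehicles (0.75, above the 0.5 threshold) and becomes the sole target vehicle brand.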
In an alternative embodiment, the processing unit 32 is configured to judge whether the similarity of the vehicle contours of the two reference vehicles is greater than a first preset similarity if it is determined that the plurality of reference vehicles are two reference vehicles. The determining unit 33 is configured to determine the vehicle contour of either of the two reference vehicles as the target vehicle contour if the similarity of their vehicle contours is equal to or greater than the first preset similarity, and to determine the vehicle contours of both reference vehicles as target vehicle contours if that similarity is smaller than the first preset similarity.
In an alternative embodiment, the processing unit 32 is configured to judge, if it is determined that the plurality of reference vehicles are three or more reference vehicles, whether the similarity of the vehicle contours of any two of the three or more reference vehicles is less than the first preset similarity. The determining unit 33 is configured to determine the vehicle contours of the three or more reference vehicles as the target vehicle contours if the similarities of the vehicle contours of any two of them are all less than the first preset similarity. The determining unit 33 is configured to determine, if the similarities of the vehicle contours of any two of the three or more reference vehicles are not all less than the first preset similarity, one or more second reference vehicle sets from the three or more reference vehicles according to their vehicle contours, where the similarity of the vehicle contours of any two of the at least two reference vehicles included in each second reference vehicle set is equal to or greater than the first preset similarity. The determining unit 33 is configured to determine, according to the vehicle contours of the reference vehicles included in each of the one or more second reference vehicle sets, the vehicle contour corresponding to each second reference vehicle set, and to determine these vehicle contours as the target vehicle contours.
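The formation of the second reference vehicle sets can be sketched as a greedy grouping over pairwise contour similarities. The scalar stand-in for a contour, the similarity function, and the greedy strategy are all illustrative assumptions; the patent does not fix the grouping algorithm:

```python
def contour_groups(contours, similarity, first_preset_similarity):
    """Group reference-vehicle contours so that within each group the
    similarity of any two members is >= first_preset_similarity; a contour
    that fits no existing group starts a new one (greedy sketch)."""
    groups = []
    for c in contours:
        for g in groups:
            if all(similarity(c, m) >= first_preset_similarity for m in g):
                g.append(c)
                break
        else:
            groups.append([c])
    return groups

# toy contours as scalars, similarity as 1 minus their absolute difference
sim = lambda a, b: 1 - abs(a - b)
print(contour_groups([0.90, 0.92, 0.30], sim, 0.9))  # [[0.9, 0.92], [0.3]]
```

Groups containing a single contour would not form a second reference vehicle set, since such a set contains at least two reference vehicles, so a caller could filter those out before deriving the target vehicle contours.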
In an alternative embodiment, the processing unit 32 is configured to perform the following determination operation for any of the one or more second vehicles. The obtaining unit 31 is configured to obtain the vehicle brand and the vehicle contour of that second vehicle. The processing unit 32 is configured to judge whether the vehicle brand and the vehicle contour of that second vehicle match the reference vehicle characteristic information. The determining unit 33 is configured to determine that second vehicle as the third vehicle if its vehicle brand and vehicle contour match the reference vehicle characteristic information. The processing unit 32 is configured to perform the determination operation on the one or more second vehicles respectively, so as to determine whether a third vehicle matching the reference vehicle characteristic information exists among them.
In an alternative embodiment, the determining unit 33 is configured to determine that the vehicle brand and the vehicle contour of any second vehicle match the reference vehicle characteristic information if it is determined that the vehicle brand of that second vehicle is the same as the target vehicle brand and the similarity between its vehicle contour and the target vehicle contour is equal to or greater than a second preset similarity.
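The matching rule just described reduces to two conditions: brand equality and contour similarity reaching the second preset similarity. A minimal sketch, with an assumed 0.8 default for the second preset similarity:

```python
def matches_reference(brand, contour_similarity, target_brand,
                      second_preset_similarity=0.8):
    """A second vehicle matches the reference vehicle characteristic
    information when its brand equals the target vehicle brand and its
    contour similarity to the target contour reaches the threshold."""
    return brand == target_brand and contour_similarity >= second_preset_similarity

print(matches_reference("BrandA", 0.92, "BrandA"))  # True
print(matches_reference("BrandA", 0.75, "BrandA"))  # False
```

Applying this predicate to each second vehicle in turn implements the per-vehicle determination operation described above.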
In an alternative embodiment, the processing unit 32 is configured to record the third vehicle synchronously through the vehicle-mounted camera of the first vehicle and the camera of the terminal device, and to obtain the target video content for the third vehicle by fusing, with the timestamp as a marker, the video content of the third vehicle recorded by the vehicle-mounted camera with that recorded by the camera of the terminal device.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may be the automatic driving domain controller in the above embodiment, and may be configured to implement the steps of the photographing control method based on demand recognition in the driving mode executed by the automatic driving domain controller described in the above embodiment. The electronic device may include: a processor 51, a memory 52 and a bus system 53.
Memory 52 includes, but is not limited to, RAM, ROM, EPROM or CD-ROM, and memory 52 is used to store the relevant instructions and data. The memory 52 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof:
Operation instructions: including various operation instructions for implementing various operations.
Operating system: including various system programs for implementing various basic services and handling hardware-based tasks.
Only one memory is shown in fig. 5, but of course a plurality of memories may be provided as necessary.
As shown in fig. 5, the electronic device may further include an input/output device 54, and the input/output device 54 may be a communication module or a transceiver circuit. In the embodiment of the present application, the input/output device 54 is used to perform the transceiving process of data or signaling, such as the reference vehicle characteristic information, referred to in the embodiment.
The processor 51 may be a controller, CPU, general purpose processor, DSP, ASIC, FPGA or other programmable logic device, transistor logic device, hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the embodiments of the application. The processor 51 may also be a combination of computing functions, e.g., comprising one or more microprocessors, a combination of a DSP and a microprocessor, or the like.
In a particular application, the various components of the electronic device are coupled together by a bus system 53, where the bus system 53 may include a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as the bus system 53 in fig. 5, where it is drawn only schematically.
It should be noted that, in practical applications, the processor in the embodiment of the present application may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed.
It will be appreciated that the memory in the embodiments of the present application can be either volatile memory or nonvolatile memory, or can include both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory can be a random access memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DR RAM). It should be noted that the memories described in the embodiments of the present application are intended to comprise, without being limited to, these and any other suitable types of memories.
The embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a computer, implements the method or steps performed by the automatic driving domain controller in the above embodiments.
Embodiments of the present application further provide a computer program product, where the computer program product, when executed by a computer, implements the method or steps performed by the automatic driving domain controller in the foregoing embodiments.
It should be noted that, for simplicity, the foregoing method embodiments are described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, because some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions involved are not all necessarily required by the present application.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Those skilled in the art will appreciate that all or part of the steps in the various methods of the above shooting method embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable memory, which may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the shooting control method and apparatus based on demand recognition in a driving mode of the present application, and the description of the above embodiments is merely intended to help understand the method and its core idea. Meanwhile, those skilled in the art may, based on the idea of the shooting control method and apparatus based on demand recognition in a driving mode of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present application.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
The foregoing embodiments describe the objects, technical solutions, and advantages of the present application in further detail. It should be understood that the above embodiments are merely examples of the present application and are not intended to limit its scope of protection; any modifications, equivalent substitutions, improvements, and the like made on the basis of the technical solutions of the present application shall fall within the scope of protection of the present application.

Claims (10)

1. A shooting control method based on demand recognition in a driving mode, characterized in that the method is applied to an automatic driving domain controller on a first vehicle, a smartphone holder and a terminal device are further arranged on the first vehicle, and the automatic driving domain controller is communicatively connected to the smartphone holder and the terminal device, the method comprising:
if it is detected that the terminal device is in a charging state and that a driving mode of the terminal device is enabled, acquiring reference vehicle feature information of a reference vehicle provided by a user;
acquiring a current environment image of the first vehicle, captured by a vehicle body domain controller on the first vehicle through a vehicle-mounted camera;
if it is determined from the environment image that one or more second vehicles are present around the first vehicle, extracting vehicle feature information of the one or more second vehicles, and determining, according to the reference vehicle feature information and the vehicle feature information of the one or more second vehicles, whether a third vehicle matching the reference vehicle feature information is present among the one or more second vehicles;
if it is determined that the third vehicle is present, determining whether the terminal device is present on the smartphone holder;
if it is determined that the terminal device is present on the smartphone holder, determining a target clamping height corresponding to the smartphone holder, and sending a first instruction to the smartphone holder so that the smartphone holder is adjusted to the target clamping height, wherein the first instruction comprises the target clamping height, and when the clamping height of the smartphone holder is the target clamping height, an original viewing range of the terminal device comprises the third vehicle;
sending a second instruction to the terminal device so that the terminal device enables a video recording function of its camera and records a video of the third vehicle, and stopping the recording once the third vehicle is no longer within the viewing range of the camera of the terminal device; and
if it is determined that the terminal device is not present on the smartphone holder, generating and outputting reminder information, and determining again whether the terminal device is present on the smartphone holder; and if it is again determined that the terminal device is not present on the smartphone holder, maintaining the current system state, wherein the reminder information is used to remind the user to place the terminal device on the smartphone holder.
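Outside the claim language, the control flow of claim 1 can be sketched in Python. This is illustrative only: `ctrl` and all of its method names are hypothetical stand-ins for the interfaces of the domain controller, smartphone holder, and terminal device, which the patent does not specify at this level.

```python
def shooting_control_step(ctrl):
    """One pass of the control flow of claim 1. `ctrl` is a hypothetical
    facade over the automatic driving domain controller, the smartphone
    holder, and the terminal device (all method names are assumed)."""
    # Precondition: the terminal device is charging and its driving mode is enabled.
    if not (ctrl.terminal_charging() and ctrl.driving_mode_enabled()):
        return
    ref = ctrl.get_reference_vehicle_features()       # provided by the user
    image = ctrl.get_environment_image()              # via the body domain controller
    # Find a third vehicle matching the reference features among the second vehicles.
    second_vehicles = ctrl.detect_vehicles(image)
    third = next((v for v in second_vehicles if ctrl.matches(ref, v)), None)
    if third is None:
        return
    if not ctrl.terminal_on_holder():
        ctrl.output_reminder("Place the terminal device on the holder")
        if not ctrl.terminal_on_holder():             # check once more
            return                                    # keep the current system state
    # Adjust the holder, then have the terminal record until the third
    # vehicle leaves the camera's viewing range.
    ctrl.send_to_holder(first_instruction=ctrl.target_clamping_height(third))
    ctrl.send_to_terminal(second_instruction="record", target=third)
```

The sketch deliberately mirrors the claim's branch structure: the reminder path re-checks the holder exactly once before falling back to the current system state.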
2. The method of claim 1, wherein the determining the target clamping height corresponding to the smartphone holder comprises:
acquiring a current clamping height of the smartphone holder and a posture of the terminal device;
determining, according to the posture of the terminal device, whether the terminal device is inverted;
if it is determined that the terminal device is not inverted, determining, according to a preset model of the terminal device, a size of the terminal device and position parameters of the camera of the terminal device on the terminal device; and
determining the target clamping height corresponding to the smartphone holder according to the current clamping height of the smartphone holder, the size of the terminal device, the position parameters of the camera of the terminal device on the terminal device, pre-stored geometric model feature parameters between the base of the smartphone holder and the engine hood of the first vehicle, and a first relative position between the third vehicle and the base of the smartphone holder.
3. The method of claim 2, wherein the determining the target clamping height corresponding to the smartphone holder according to the current clamping height of the smartphone holder, the size of the terminal device, the position parameters of the camera of the terminal device on the terminal device, the pre-stored geometric model feature parameters between the base of the smartphone holder and the engine hood of the first vehicle, and the first relative position between the third vehicle and the base of the smartphone holder comprises:
determining a second relative position between the camera of the terminal device and the base of the smartphone holder according to the current clamping height of the smartphone holder, the size of the terminal device, and the position parameters of the camera of the terminal device on the terminal device;
determining, according to the second relative position, the first relative position between the third vehicle and the base of the smartphone holder, the travel of the smartphone holder, and shooting parameters of the camera of the terminal device, a first clamping height range within which the original viewing range of the camera of the terminal device comprises the third vehicle, and a viewing range corresponding to the first clamping height range;
determining, according to the pre-stored geometric model feature parameters between the base of the smartphone holder and the engine hood of the first vehicle and the viewing range corresponding to the first clamping height range, a second clamping height range within which the actual viewing effect is not blocked by the engine hood of the first vehicle; and
selecting the minimum value within the second clamping height range as the target clamping height.
4. The method of claim 3, wherein the determining, according to the second relative position, the first relative position between the third vehicle and the base of the smartphone holder, the travel of the smartphone holder, and the shooting parameters of the camera of the terminal device, the first clamping height range within which the original viewing range of the camera of the terminal device comprises the third vehicle and the viewing range corresponding to the first clamping height range comprises:
determining a set of relations between the clamping height of the smartphone holder and the viewing range of the camera of the terminal device according to the second relative position, the travel of the smartphone holder, and the shooting parameters of the camera of the terminal device; and
determining, according to the set of relations and the first relative position, the first clamping height range within which the original viewing range of the camera of the terminal device comprises the third vehicle, and the viewing range corresponding to the first clamping height range.
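As a rough illustration of claims 3 and 4 (not the patented implementation), selecting the target clamping height amounts to scanning the holder's travel for clamping heights whose viewing range contains the third vehicle (the first range), intersecting these with the heights at which the engine hood does not block the view (the second range), and taking the minimum. In this sketch, `in_view` and `unoccluded` are hypothetical predicates standing in for the geometric models the claims describe, and the fixed scan step is an assumption:

```python
def select_target_clamping_height(travel, in_view, unoccluded, step=1.0):
    """Scan the holder's travel (min_h, max_h) in `step` increments.
    `in_view(h)` models the first clamping height range of claim 3
    (third vehicle inside the original viewing range at height h);
    `unoccluded(h)` models the second range (view not blocked by the
    hood). Returns the minimum admissible height, or None if none."""
    min_h, max_h = travel
    candidates = []
    h = min_h
    while h <= max_h:
        if in_view(h) and unoccluded(h):
            candidates.append(h)
        h += step
    return min(candidates) if candidates else None
```

Taking the minimum of the admissible range matches claim 3's final step, which keeps the phone as low in the holder's travel as the geometry allows.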
5. The method of any one of claims 1-4, wherein the acquiring the reference vehicle feature information of the reference vehicle provided by the user comprises:
acquiring the number of reference vehicles provided by the user; and
if it is determined that the number of reference vehicles provided by the user is 1, acquiring a vehicle brand and a vehicle contour of the reference vehicle, and determining the vehicle brand and the vehicle contour of the reference vehicle as the reference vehicle feature information of the reference vehicle.
6. The method of claim 5, further comprising:
if it is determined that the number of reference vehicles provided by the user is greater than 1, acquiring a vehicle brand of each of the plurality of reference vehicles;
determining a target vehicle brand according to the vehicle brand of each of the plurality of reference vehicles, wherein the proportion, among the plurality of reference vehicles, of reference vehicles belonging to the target vehicle brand is equal to or greater than a preset proportion;
acquiring a vehicle contour of each of the plurality of reference vehicles, and determining a target vehicle contour corresponding to the plurality of reference vehicles according to the vehicle contour of each reference vehicle; and
determining the reference vehicle feature information of the reference vehicle according to the target vehicle brand and the target vehicle contour.
7. The method of claim 6, wherein the determining the target vehicle brand according to the vehicle brand of each of the plurality of reference vehicles comprises:
dividing the plurality of reference vehicles into N1 first reference vehicle sets according to the vehicle brands of the reference vehicles, wherein the vehicle brands corresponding to the respective first reference vehicle sets among the N1 first reference vehicle sets are different from one another, the reference vehicles included in each first reference vehicle set belong to the vehicle brand corresponding to that first reference vehicle set, and N1 is a positive integer equal to or greater than 1;
determining N2 target reference vehicle sets from the N1 first reference vehicle sets according to the number of reference vehicles contained in each first reference vehicle set, wherein the proportion, among the plurality of reference vehicles, of the reference vehicles contained in each of the N2 target reference vehicle sets is equal to or greater than the preset proportion, and N2 is a positive integer equal to or greater than 1; and
determining the vehicle brand corresponding to each of the N2 target reference vehicle sets as a target vehicle brand.
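Outside the claim language, the brand-grouping logic of claims 6 and 7 can be sketched as follows. This is an illustrative reading only (the function name and the example threshold are assumptions): grouping by brand yields the N1 first reference vehicle sets, and keeping the brands whose share of all reference vehicles meets the preset proportion yields the N2 target sets.

```python
from collections import Counter

def target_vehicle_brands(brands, preset_ratio=0.5):
    """Group reference vehicles by brand (the N1 first reference vehicle
    sets of claim 7) and return the brands whose proportion among all
    reference vehicles is equal to or greater than the preset proportion
    (the brands of the N2 target reference vehicle sets)."""
    counts = Counter(brands)          # one implicit set per distinct brand
    total = len(brands)
    return [b for b, n in counts.items() if n / total >= preset_ratio]
```

With a preset proportion of 0.5, a list of reference vehicles dominated by a single brand yields that brand alone, while an even split between two brands yields both.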
8. A shooting control apparatus based on demand recognition in a driving mode, characterized by comprising:
an obtaining unit, configured to acquire reference vehicle feature information of a reference vehicle provided by a user if it is detected that the terminal device is in a charging state and that a driving mode of the terminal device is enabled;
an acquisition unit, configured to acquire a current environment image of the first vehicle captured by a vehicle body domain controller on the first vehicle through a vehicle-mounted camera;
a processing unit, configured to: if it is determined from the environment image that one or more second vehicles are present around the first vehicle, extract vehicle feature information of the one or more second vehicles, and determine, according to the reference vehicle feature information and the vehicle feature information of the one or more second vehicles, whether a third vehicle matching the reference vehicle feature information is present among the one or more second vehicles; and
a determining unit, configured to determine whether the terminal device is present on the smartphone holder if it is determined that the third vehicle is present;
wherein the processing unit is further configured to determine a target clamping height corresponding to the smartphone holder and send a first instruction to the smartphone holder so that the smartphone holder is adjusted to the target clamping height, and to send a second instruction to the terminal device so that the terminal device enables a video recording function of its camera and records the third vehicle until the third vehicle is no longer within the viewing range of the camera of the terminal device, wherein the first instruction comprises the target clamping height, and when the clamping height of the smartphone holder is the target clamping height, an original viewing range of the camera of the terminal device comprises the third vehicle; and
the processing unit is further configured to generate and output reminder information if the terminal device is not present on the smartphone holder, determine again whether the terminal device is present on the smartphone holder, and, if it is again determined that the terminal device is not present on the smartphone holder, maintain the current system state, wherein the reminder information is used to remind the user to place the terminal device on the smartphone holder.
9. A computer-readable storage medium for storing a computer program which, when executed by a processor, performs the steps of the method of any one of claims 1 to 7.
10. An electronic device, comprising a memory storing a computer program and a processor implementing the steps of the method of any of claims 1 to 7 when the processor executes the computer program.
CN202211720089.XA 2022-12-30 2022-12-30 Shooting control method and device based on demand identification in driving mode Active CN115841763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211720089.XA CN115841763B (en) 2022-12-30 2022-12-30 Shooting control method and device based on demand identification in driving mode

Publications (2)

Publication Number Publication Date
CN115841763A true CN115841763A (en) 2023-03-24
CN115841763B CN115841763B (en) 2023-10-27

Family

ID=85577652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211720089.XA Active CN115841763B (en) 2022-12-30 2022-12-30 Shooting control method and device based on demand identification in driving mode

Country Status (1)

Country Link
CN (1) CN115841763B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001257920A (en) * 2000-03-13 2001-09-21 Fuji Photo Film Co Ltd Camera system
CN108024049A (en) * 2016-10-31 2018-05-11 惠州华阳通用电子有限公司 A kind of vehicle-mounted shooting device towards control method and device
US20180176457A1 (en) * 2016-12-15 2018-06-21 Motorola Solutions, Inc System and method for identifying a person, object, or entity (poe) of interest outside of a moving vehicle
CN111277755A (en) * 2020-02-12 2020-06-12 广州小鹏汽车科技有限公司 Photographing control method and system and vehicle
CN214929500U (en) * 2021-04-20 2021-11-30 北京汽车集团越野车有限公司 Automobile with a detachable front cover
CN114760417A (en) * 2022-04-25 2022-07-15 北京地平线信息技术有限公司 Image shooting method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
KR102418446B1 (en) Picture-based vehicle damage assessment method and apparatus, and electronic device
US10013823B2 (en) Vehicle information processing system and method
KR20190060817A (en) Image based vehicle damage determination method and apparatus, and electronic device
TW202011254A (en) Auxiliary method for capturing damage assessment image of vehicle, device, and apparatus
US11087138B2 (en) Vehicle damage assessment method, apparatus, and device
KR102241906B1 (en) System and method for guiding parking location of a vehicle
CN112363767A (en) Vehicle-mounted camera calling method and device
CN114913506A (en) 3D target detection method and device based on multi-view fusion
CN104392611A (en) Method and system for identifying high-price cars
CN114906049B (en) Automobile rear-row reading lamp control method and device and vehicle
CN110956716A (en) Vehicle-based image acquisition method, transmission method, device, vehicle, system and medium
JP2021510428A (en) Parking guidance system and its control method
CN115841763A (en) Shooting control method and device based on demand recognition in driving mode
CN111667602A (en) Image sharing method and system for automobile data recorder
CN116080464A (en) Vehicle charging method and device based on charging order
CN110933314A (en) Focus-following shooting method and related product
CN115118936B (en) Remote checking method and device for vehicle
CN115633138A (en) Vehicle-mounted group photo method, device, storage medium and equipment
CN111796754B (en) Method and device for providing electronic books
CN114286004A (en) Focusing method, shooting device, electronic equipment and medium
CN114492492A (en) Two-dimensional code scanning method and device, storage medium and electronic equipment
CN112319372A (en) Image display method and device based on streaming media rearview mirror
EP3089100A1 (en) Method, apparatus, and system for displaying use records
CN113507559A (en) Intelligent camera shooting method and system applied to vehicle and vehicle
CN115426385B (en) Method, device, equipment and medium for acquiring information outside vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant