CN112061075A - Scene triggering method, device, equipment and storage medium - Google Patents

Scene triggering method, device, equipment and storage medium

Info

Publication number
CN112061075A
Authority
CN
China
Prior art keywords
scene
triggering
module
vehicle
condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010931368.5A
Other languages
Chinese (zh)
Other versions
CN112061075B (en)
Inventor
丁磊
郑洲
王昶旭
赵叶霖
卢加浚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Human Horizons Shanghai Internet Technology Co Ltd
Original Assignee
Human Horizons Shanghai Internet Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Human Horizons Shanghai Internet Technology Co Ltd filed Critical Human Horizons Shanghai Internet Technology Co Ltd
Priority to CN202010931368.5A priority Critical patent/CN112061075B/en
Publication of CN112061075A publication Critical patent/CN112061075A/en
Application granted granted Critical
Publication of CN112061075B publication Critical patent/CN112061075B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R25/00 Fittings or systems for preventing or indicating unauthorised use or theft of vehicles
    • B60R25/10 Fittings or systems for preventing or indicating unauthorised use or theft of vehicles actuating a signalling device
    • B60R25/102 Fittings or systems for preventing or indicating unauthorised use or theft of vehicles actuating a signalling device, a signal being sent to a remote location, e.g. a radio signal being transmitted to a police station, a security company or the owner
    • B60R25/104 Fittings or systems for preventing or indicating unauthorised use or theft of vehicles actuating a signalling device, characterised by the type of theft warning signal, e.g. visual or audible signals with special characteristics
    • B60R25/30 Detection related to theft or to other events relevant to anti-theft systems
    • B60R25/302 Detection related to theft or to other events relevant to anti-theft systems using recording means, e.g. black box
    • B60R25/305 Detection related to theft or to other events relevant to anti-theft systems using a camera
    • B60R25/31 Detection related to theft or to other events relevant to anti-theft systems of human presence inside or outside the vehicle
    • B60R2325/00 Indexing scheme relating to vehicle anti-theft devices
    • B60R2325/20 Communication devices for vehicle anti-theft devices
    • B60R2325/202 Personal digital assistant [PDA]
    • B60R2325/205 Mobile phones

Abstract

The application provides a scene triggering method, apparatus, device, and storage medium. The method includes: when a vehicle is in an unmanned state, detecting whether the vehicle is under attack according to an execution request for a target scene; and, when an attack on the vehicle is detected, sending reminding information to a mobile terminal and triggering the corresponding scene execution module to execute the function corresponding to the target scene. The technical solution of the embodiments of the application can provide users with comprehensive security protection while they are away from the vehicle, improve vehicle safety, realize intelligent vehicle services, and improve the user experience.

Description

Scene triggering method, device, equipment and storage medium
Technical Field
The present application relates to the field of intelligent vehicle technologies, and in particular, to a method, an apparatus, a device, and a storage medium for scene triggering.
Background
Various output devices are provided on a vehicle, such as a display screen, ambience lights, seats, speakers, and an air conditioner. These output devices typically perform their functions individually and cannot cooperate with each other to realize a scene. As vehicle intelligence develops, users expect vehicles to provide a greater variety of services.
Disclosure of Invention
The embodiments of the application provide a scene triggering method, apparatus, device, and storage medium to solve the above problems in the related art. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a scene triggering method, where the method includes:
under the condition that the vehicle is in an unmanned state, detecting whether the vehicle is attacked or not according to an execution request of a target scene;
and sending reminding information to the mobile terminal and triggering a corresponding scene execution module to execute a function corresponding to the target scene under the condition that the vehicle is detected to be attacked.
In a second aspect, an embodiment of the present application provides a scene triggering apparatus, including:
the detection module is used for detecting whether the vehicle is attacked or not according to the execution request of the target scene under the condition that the vehicle is in an unmanned state;
and the first triggering module is used for sending reminding information to the mobile terminal and triggering the corresponding scene execution module to execute the function corresponding to the target scene under the condition that the vehicle is detected to be attacked.
In a third aspect, an embodiment of the present application provides a scene trigger device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the scene triggering method of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the scene triggering method of the present application.
The advantages or beneficial effects of the above technical solution include at least the following: when the vehicle is locked and unoccupied, the method monitors whether the vehicle is under attack, sends a reminder to the user, and triggers the corresponding scene execution modules to execute corresponding functions, such as warning functions or functions that physically restrict entry, thereby providing the user with safe and comprehensive security protection, improving vehicle safety, realizing intelligent vehicle services, and improving the user experience.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 is an application architecture diagram of a scene triggering method according to an embodiment of the present application;
fig. 2 is a flowchart of a scene triggering method according to an embodiment of the present application;
FIG. 3 is a schematic view of a projection lamp display effect of an exterior light module according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an ISD screen display effect of an exterior light module according to an embodiment of the present application;
FIG. 5 is a schematic view of an exterior light module according to an embodiment of the present application;
fig. 6 is a timing diagram of an application example of the scene triggering method according to an embodiment of the present application;
fig. 7 is a flowchart of an application example of the scene triggering method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a scene triggering apparatus according to an embodiment of the present application;
fig. 9 is a schematic diagram of a scene triggering device according to an embodiment of the present application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
The application provides a scene triggering method for triggering a corresponding scene execution module at a vehicle end to execute a corresponding function so as to realize a target scene.
In one example, as shown in fig. 1, each user (including a first user) may edit a scene at a mobile terminal or at the vehicle end (A1), upload the edited scene to the cloud scene server (A2), and save it to the scene database (A3). The mobile terminal includes mobile smart devices such as mobile phones and tablet computers. The management terminal may send a scene query request to the cloud scene server to query scenes (B1 and B2). The cloud scene server queries the scene database for one or more scenes to be pushed and returns them to the management terminal as the scene query result (B3). The management terminal receives the scene query result from the cloud scene server and thereby obtains the scenes to be pushed (B3). The management terminal screens the scenes to be pushed to obtain selected scenes; it may also configure some initial scenes as selected scenes. The management terminal then sends a scene push request to the message management center to request that the selected scenes be pushed (B5), and the message management center pushes the selected scenes to the corresponding vehicle end (B6).
After the vehicle end receives the pushed scenes, the scenes can be synchronized to the scene service module at the vehicle end (C1). The condition management module at the vehicle end monitors the current scene conditions (C2), and when the current conditions meet a scene trigger condition, the scene engine module at the vehicle end executes the scene (C3).
After the vehicle end executes the scene, the scene execution result may be uploaded to the cloud scene server (C4). The cloud scene server is provided with a big data center and preconfigured data tracking points ("data burying points") (C5).
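To make the C1-C4 flow above concrete, the following is a minimal Python sketch of the vehicle-end side. All class and method names are hypothetical illustrations chosen for this sketch; the patent does not prescribe any particular implementation.

```python
# Minimal sketch of the vehicle-end flow (C1-C4). All names are hypothetical.

class SceneServiceModule:
    """Holds the scenes pushed from the cloud and synchronized at the vehicle end (C1)."""
    def __init__(self):
        self.scenes = {}

    def sync(self, pushed_scene):
        self.scenes[pushed_scene["id"]] = pushed_scene


class ConditionManagementModule:
    """Monitors the current scene conditions at the vehicle end (C2)."""
    def current_conditions(self):
        # In a real vehicle these would come from CAN signals, sensors, cameras, etc.
        return {"vehicle_locked": True, "occupants": 0, "impact_detected": False}


class SceneEngineModule:
    """Executes a scene when its trigger condition is met (C3)."""
    def execute(self, scene, conditions):
        print(f"executing scene {scene['id']} under {conditions}")
        return {"scene_id": scene["id"], "status": "done"}  # result uploaded to the cloud (C4)


def run_once(service, condition_mgr, engine):
    """One pass of the monitor loop: check conditions and execute any triggered scenes."""
    conditions = condition_mgr.current_conditions()
    results = []
    for scene in service.scenes.values():
        if scene["trigger"](conditions):
            results.append(engine.execute(scene, conditions))
    return results
```

A pushed sentinel-mode scene could, for example, carry a trigger such as `lambda c: c["occupants"] == 0 and c["impact_detected"]`.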
Fig. 2 shows a flowchart of a scene triggering method according to an embodiment of the present application. As shown in fig. 2, the method may include:
Step S201: when the vehicle is in an unmanned state, whether the vehicle is under attack is detected according to an execution request for a target scene.
The target scene may be a scene selected by the user. For example, the user can select a target scene from a plurality of scenes prestored in the scene service module through the in-vehicle scene application (APP) at the vehicle end, thereby triggering an execution request for the target scene. Ways of selecting the target scene through the in-vehicle scene APP include, but are not limited to, selection on the screen interface and voice triggering.
In addition, the user may also select a target scene through the scene APP on the mobile terminal, which then sends an execution request for the target scene to the vehicle end over the network connection between the mobile terminal and the vehicle end. In the embodiments of the application, the target scene may be a guard/sentinel security-mode scene.
In one example, when the user leaves the vehicle so that the vehicle is in an unmanned state, the user may select the target scene in any of the above ways, thereby generating an execution request for the target scene at the vehicle end. After receiving the execution request for the target scene, the vehicle end detects whether the vehicle is under attack, for example whether the vehicle is struck by an external force while stationary or is being entered illegally. Whether the vehicle is under attack can be detected based on sensors, camera devices, or the like on the vehicle.
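As a rough illustration of step S201, the following sketch combines impact and intrusion signals into a single attack decision. The signal names and the threshold value are assumptions for illustration, not values disclosed in the application.

```python
# Illustrative sketch of step S201: deciding whether the vehicle is "attacked"
# from sensor and camera inputs while unmanned. Thresholds and signal names are
# assumptions for illustration only.
def is_vehicle_attacked(impact_g: float, door_opened_without_key: bool,
                        motion_inside_cabin: bool, gear: str = "P") -> bool:
    IMPACT_THRESHOLD_G = 0.8  # assumed threshold for an external impact at standstill
    struck_while_stationary = gear == "P" and impact_g >= IMPACT_THRESHOLD_G
    illegal_entry = door_opened_without_key or motion_inside_cabin
    return struck_while_stationary or illegal_entry
```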
Step S202: when an attack on the vehicle is detected, reminding information is sent to the mobile terminal and the corresponding scene execution module is triggered to execute the function corresponding to the target scene.
In one example, when an attack on the vehicle is detected, triggering the start of the target scene includes sending reminding information that the vehicle is under attack to the scene APP on the user's mobile terminal.
In one implementation, the method of the embodiments of the application may further include: when the vehicle is in an unmanned state, triggering the automobile data recorder (dashcam) to operate according to the execution request of the target scene.
That is, when the user selects a target scene, the automobile data recorder is automatically started and records in real time. Further, when an attack on the vehicle is detected, the reminding information sent to the mobile terminal also includes the multimedia resources captured by the automobile data recorder.
Further, after the reminding information is sent to the mobile terminal, it can be confirmed whether the user of the mobile terminal has read it; if the user has not read the reminding information, an alarm signal is sent.
In one example, a signal may be sent to a Customer Service Center (CSC) when an attack on the vehicle is detected. After receiving the signal, the CSC may check whether the user of the mobile terminal has viewed or read the reminding information. If not, an alarm signal can be sent, such as a notification by SMS, WeChat, or a phone call.
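The reminder-then-escalate path can be sketched as follows. The transport functions and the 60-second wait are placeholders (assumptions); in practice the push service and the CSC would provide the delivery and read-receipt mechanisms.

```python
# Sketch of the escalation path: send the reminder, then have the customer
# service center (CSC) check whether it was read and raise an alarm otherwise.
# All functions here are placeholders, not a real API.
import time

def send_reminder(mobile_terminal: str, message: str) -> str:
    print(f"push to {mobile_terminal}: {message}")
    return "msg-001"  # hypothetical message id

def is_read(message_id: str) -> bool:
    # In practice this would be queried from the push service's read receipts.
    return False

def csc_escalate(mobile_terminal: str, message_id: str, wait_s: int = 60) -> None:
    time.sleep(wait_s)  # give the user time to read the reminder
    if not is_read(message_id):
        for channel in ("sms", "wechat", "phone_call"):
            print(f"alarm via {channel} to {mobile_terminal}")
```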
Each scene has scene configuration information, which includes the scene execution modules to be triggered and the functions to be executed under different trigger conditions. For the target scene of this embodiment, the configuration information specifies the scene execution modules that need to be triggered and the functions they need to execute under different trigger conditions, so as to warn an attacker or restrict an intruder's behavior. Here the trigger condition may be that an attack on the vehicle is detected.
It should be noted that the scene configuration information may be preset according to actual needs. For example, it may already be set in the scene pushed to the vehicle end by the message management center, i.e. configured by the management terminal or edited by the user in advance. The scene configuration information may also be edited and updated by the user after the target scene has been downloaded at the vehicle end. For example, one or more items in the scene configuration information of the target scene may be hot-updated according to the user's edits.
In one example, the user may edit the trigger condition, or edit the functions or parameters to be executed by a given scene execution module, such as the time parameters (trigger time or duration) of each scene execution module, or the display effect of the ambience lights or the exterior light module. Content edited by the user can be conveniently applied to the vehicle end's scene configuration information through hot update, enabling personalized scenes and further improving the user experience. In addition, a target scene edited by a user can be shared with other users, enabling scene sharing and interaction between users.
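One possible representation of scene configuration information and of a hot update driven by user edits is sketched below. The field names and the shallow-merge strategy are assumptions made for this illustration.

```python
# Sketch of scene configuration information and a hot update from user edits.
# Field names are assumptions chosen to mirror the description above.
scene_config = {
    "scene_id": "sentinel_mode",
    "triggers": {
        "vehicle_attacked": [
            {"module": "ambience_light", "function": "fast_red_flow", "offset_s": 2},
            {"module": "horn", "function": "buzz_alarm", "offset_s": 2},
            {"module": "seat", "function": "restrict_entry", "offset_s": 5},
        ]
    },
}

def hot_update(config: dict, user_edits: dict) -> dict:
    """Apply user-edited items to the existing configuration in place.

    Shallow merge for illustration; a real implementation would merge per item.
    """
    for key, value in user_edits.items():
        config[key] = value
    return config

# e.g. the user changes the light effect and shortens its delay
hot_update(scene_config, {"triggers": {"vehicle_attacked": [
    {"module": "ambience_light", "function": "slow_red_breathe", "offset_s": 1},
]}})
```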
In this embodiment, the scene execution modules may include a horn (loudspeaker), the (in-vehicle) ambience lights, the exterior light module, the seat module, a screen display module (such as the center control screen), an animated voice interaction module, and the like. The animated voice interaction module may be an artificial intelligence robot (e.g., the AI iRobot), such as a virtual avatar designed by the developer, used for animated voice interaction with the in-vehicle user.
In one embodiment, step S202 may include: triggering the in-vehicle ambience lights and the exterior light module to display a dynamic warning effect; and/or triggering the horn to sound an alarm; and/or triggering the seat module and the steering wheel module to adjust to a state that restricts seating (entry); and/or triggering the animated voice interaction module to display a dynamic warning effect; and/or triggering the display screen to enter the warning interface.
Referring to fig. 3 to 5, the exterior light module includes at least one of an Interactive Signal Display (ISD) light 52, a projection lamp, and a through light. The projection lamp may be a Digital Light Processing (DLP) projection lamp 51. The DLP projection lamp 51 can serve as a conventional high-beam and low-beam headlamp, and can also project content such as video or pictures. Fig. 3 shows an example of the projection effect of the DLP projection lamp 51. The ISD lights 52 may include conventional lights 521 (e.g., daytime running lights, position lights, turn signals, stop lights, backup lights, logo lights, and front and rear through lights) and an ISD screen 522. Corresponding lighting effects are achieved through dynamic display of the conventional lights 521. The ISD screen 522 may be a matrix screen formed of multiple Light Emitting Diode (LED) lamps and can be used to display pictures, animations, and the like. Fig. 4 illustrates an example of the display effect of the ISD screen 522.
In one example, as shown in fig. 5, the exterior light module may include the vehicle's left front lamps (DLP projection lamp 51 and ISD lights 52), right front lamps (DLP projection lamp 51 and ISD lights 52), left rear lamps (ISD lights 52), and right rear lamps (ISD lights 52). There are four groups of front and rear ISD lights 52, and each group includes conventional lights 521 and an ISD screen 522 below the conventional lights 521.
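The lamp layout just described can be modeled roughly as follows; the capability flags and command strings are assumptions used only to illustrate how a warning display could be fanned out to the individual lamp units.

```python
# Hypothetical model of the exterior light module layout (two front corners with
# DLP projection lamps and ISD lights, two rear corners with ISD lights, plus
# front and rear through lights). Command strings are assumptions.
EXTERIOR_LIGHTS = {
    "front_left":  {"dlp": True,  "isd_screen": True},
    "front_right": {"dlp": True,  "isd_screen": True},
    "rear_left":   {"dlp": False, "isd_screen": True},
    "rear_right":  {"dlp": False, "isd_screen": True},
}

def warning_display_commands(symbol: str = "!", projection: str = "SOS") -> list:
    """Build per-lamp commands for the dynamic warning effect."""
    commands = []
    for position, capabilities in EXTERIOR_LIGHTS.items():
        if capabilities["isd_screen"]:
            commands.append((position, "isd_screen", f"show:{symbol}"))
        if capabilities["dlp"]:
            commands.append((position, "dlp", f"project:{projection}"))
    # The front and rear through lights show the fast-flowing red effect.
    commands.append(("front", "through_light", "red_fast_flow"))
    commands.append(("rear", "through_light", "red_fast_flow"))
    return commands
```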
In one example, as shown in FIG. 6, a dynamic warning effect may be triggered in which the in-vehicle mood light is displayed as a red fast-flowing; the alarm sound with the buzzing effect can be triggered by the loudspeaker; the exterior light module can be triggered to display dynamic warning effects (such as displaying an | ' on the ISD screen 522 of the ISD light 52 before and after triggering, displaying a dynamic warning effect of red fast flowing through the light before and after triggering, and triggering a DLP projection light to project an ' SOS '), so that the scene function of the movement of the guard is realized.
Further, as shown in fig. 6, the seats can be triggered to move fully forward with the backrests adjusted fully forward and the steering wheel lowered to its lowest position, making the seating space so narrow that sitting is restricted and intrusion or theft becomes inconvenient; the animated voice interaction module (such as the AI iRobot) can be triggered to display a dynamic warning effect in an angry state and to express the vehicle's emotion through voice; and the display screen (such as the center control screen) can be triggered to enter the warning interface, for example displaying the text "Recorded and alarm raised", thereby realizing the "deterrence" scene function.
In one embodiment, the time at which each scene execution module is triggered may follow the time parameters in the scene configuration information.
For example: when a first preset time condition is reached, at least one first scene execution module is triggered to execute the function corresponding to first scene configuration information; and/or, when a second preset time condition is reached, at least one second scene execution module is triggered to execute the function corresponding to second scene configuration information.
The preset time condition may be a time interval after the target scene is started. For example, as shown in fig. 6, 2 s after the target scene is started, i.e. when the first preset time condition is reached, the first scene execution modules such as the in-vehicle ambience lights, the horn, and the exterior light module are triggered to execute the functions corresponding to the first scene configuration information (as in the "guard deployment" example above), realizing the "guard deployment" scene function. 5 s after the target scene is started, i.e. when the second preset time condition is reached, the second scene execution modules such as the seat module, the steering wheel module, the animated voice interaction module, and the display screen are triggered to execute the functions corresponding to the second scene configuration information (as in the "deterrence" example above), realizing the "deterrence" scene function.
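The staged triggering by preset time conditions can be sketched with simple timers. The module and function names are assumptions; threading.Timer is used here only to illustrate the 2 s and 5 s offsets described above.

```python
# Sketch of triggering scene execution modules by the time parameters in the
# scene configuration (e.g. 2 s and 5 s after the target scene starts).
import threading

def trigger_module(module: str, function: str) -> None:
    print(f"{module} -> {function}")

def start_target_scene(staged_actions: list) -> list:
    """staged_actions: list of (offset_seconds, module, function)."""
    timers = []
    for offset_s, module, function in staged_actions:
        t = threading.Timer(offset_s, trigger_module, args=(module, function))
        t.start()
        timers.append(t)
    return timers

# First preset time condition (2 s): warning modules; second (5 s): deterrence modules.
start_target_scene([
    (2, "ambience_light", "red_fast_flow"),
    (2, "horn", "buzz_alarm"),
    (2, "exterior_lights", "warning_display"),
    (5, "seat_and_steering", "restrict_entry"),
    (5, "ai_irobot", "angry_warning"),
    (5, "center_screen", "warning_interface"),
])
```

In a real vehicle the offsets would come from the time parameters in the scene configuration information rather than being hard-coded.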
As another example, as shown in fig. 6, when an attack on the vehicle is detected, the target scene is triggered to start, and upon starting, the "sentinel advance warning" scene function is realized.
An application example of the scene triggering method according to the embodiments of the application is described below with reference to fig. 6 and 7. As shown in fig. 7, the scene triggering method of this example includes:
(1) Selecting a target scene: the user can select a target scene through a scene card on the screen interface at the vehicle end, trigger it by voice, or select it through the APP on the mobile terminal. After the target scene is selected, the automobile data recorder is started to capture multimedia resources in real time, i.e. its camera records video in real time.
(2) Triggering the target scene: an attack on the vehicle is detected, for example the vehicle is struck by an external force while stationary, or the vehicle is entered illegally.
(3) When the target scene starts, the "sentinel advance warning" scene function is executed: reminding information is sent to the mobile terminal, including a multimedia resource of preset duration, such as a 30 s video. Meanwhile, after receiving the signal, the CSC may check whether the user of the mobile terminal has viewed or read the reminding information, and if not, may send an alarm signal, such as a notification by SMS, WeChat, or a phone call.
(4) 2 s after the target scene starts, the "guard deployment" scene function is executed: the horn is triggered to sound a buzzing alarm; the ISD screens 522 of the front and rear ISD lights 52 display "!"; the front and rear through lights display the fast-flowing red dynamic warning effect; and the DLP projection lamps are triggered to project "SOS".
(5) 5 s after the target scene starts, the "deterrence" scene function is executed: the seats are triggered to move fully forward with the backrests adjusted fully forward, and the steering wheel is lowered to its lowest position; the AI iRobot is triggered to display a dynamic warning effect in an angry state and express the vehicle's emotion through voice; and the center control screen is triggered to display the text "Recorded and alarm raised".
(6) 25 s after the target scene starts, the "full team assembly" scene function is executed: each scene execution module suspends its corresponding function.
(7) 2 minutes (min) after the target scene starts, each scene execution module is restored to its initial state before the target scene started.
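For reference, the complete example timeline of steps (3) to (7) can be written as staged actions and fed to a scheduler such as the timer sketch shown earlier; the action names are assumptions that mirror the steps above.

```python
# The example timeline above, expressed as (offset_seconds, module, function)
# tuples compatible with the start_target_scene() sketch shown earlier.
# All names are hypothetical illustrations.
SENTINEL_TIMELINE = [
    (0,   "mobile_push",    "send_reminder_with_30s_video"),  # sentinel advance warning
    (2,   "horn",           "buzz_alarm"),                    # guard deployment
    (2,   "isd_screens",    "show_exclamation_mark"),
    (2,   "through_lights", "red_fast_flow"),
    (2,   "dlp_projection", "project_sos"),
    (5,   "seat_steering",  "restrict_entry"),                # deterrence
    (5,   "ai_irobot",      "angry_warning_and_voice"),
    (5,   "center_screen",  "show_recorded_and_alarm_text"),
    (25,  "all_modules",    "suspend"),                       # full team assembly
    (120, "all_modules",    "restore_initial_state"),         # 2 min after start
]
```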
According to the method, when the vehicle is locked and unoccupied, whether the vehicle is under attack is monitored, a reminder is sent to the user, and the corresponding scene execution modules are triggered to execute corresponding functions, such as warning functions or functions that physically restrict entry, thereby providing the user with safe and thorough security protection.
An embodiment of the present application further provides a scene triggering apparatus. As shown in fig. 8, the apparatus may include:
a detection module 801, configured to detect whether a vehicle is attacked according to an execution request of a target scene when the vehicle is in an unmanned state;
the first triggering module 802 is configured to send a notification message to the mobile terminal and trigger the corresponding scene execution module to execute a function corresponding to the target scene when it is detected that the vehicle is attacked.
In an implementation manner, the reminding information includes multimedia resources acquired by the automobile data recorder, and the device further includes a second triggering module, configured to trigger the automobile data recorder to work according to an execution request of a target scene when the vehicle is in an unmanned state.
In one embodiment, the device further comprises a confirmation module, configured to confirm whether the user of the mobile terminal has read the reminder information after sending the reminder information to the mobile terminal; and the alarm signal sending module is used for sending an alarm signal under the condition that the user of the mobile terminal does not read the reminding information.
In one embodiment, the first triggering module 802 is configured to trigger the interior atmosphere light and exterior light module of the vehicle to exhibit a dynamic warning effect; and/or triggering a loudspeaker to give out an alarm sound; and/or triggering the seat module and the steering wheel module to adjust to a state of restricting seating; triggering an animation voice interaction module to display a dynamic warning effect; and/or triggering the display screen to enter the warning interface.
In one embodiment, the exterior light module includes at least one of an interactive signal display light, a projection light, and a through light.
In an implementation manner, the apparatus further includes a hot update module, configured to hot-update one or more items in the scene configuration information of the target scene according to the user editing information, where the scene configuration information includes the scene execution modules that need to be triggered and the functions they need to execute under different trigger conditions.
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
Fig. 9 shows a block diagram of a scene triggering device according to an embodiment of the present application. As shown in fig. 9, the device includes: a memory 901 and a processor 902, the memory 901 storing instructions executable on the processor 902. When executing the instructions, the processor 902 implements any of the methods in the above embodiments. There may be one or more memories 901 and processors 902. The terminal or server is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The terminal or server may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the present application described and/or claimed herein.
The device may further include a communication interface 903 for communicating with external devices for interactive data transmission. The various components are interconnected by different buses and may be mounted on a common motherboard or in other ways as needed. The processor 902 may process instructions executed within the terminal or server, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as needed. Likewise, multiple terminals or servers may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 9, but this does not indicate that there is only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 901, the processor 902, and the communication interface 903 are integrated on a chip, the memory 901, the processor 902, and the communication interface 903 may complete mutual communication through an internal interface.
It should be understood that the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and the like. A general-purpose processor may be a microprocessor or any conventional processor. It is noted that the processor may be a processor supporting the Advanced RISC Machines (ARM) architecture.
Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 901) storing computer instructions which, when executed by a processor, implement the methods provided in the embodiments of the present application.
Alternatively, the memory 901 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of a terminal or a server, and the like. Further, the memory 901 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 901 may optionally include memory located remotely from the processor 902, which may be connected to a terminal or server over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module can be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as an independent product, the integrated module may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present application, and these should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A method for triggering a scene, comprising:
under the condition that the vehicle is in an unmanned state, detecting whether the vehicle is attacked or not according to an execution request of a target scene;
and sending reminding information to the mobile terminal and triggering a corresponding scene execution module to execute the function corresponding to the target scene under the condition that the vehicle is detected to be attacked.
2. The method of claim 1, wherein the reminder information comprises a multimedia asset acquired by a vehicle event recorder, the method further comprising:
and under the condition that the vehicle is in an unmanned state, triggering the automobile data recorder to work according to the execution request of the target scene.
3. The method of claim 2, wherein after sending the alert message to the mobile terminal, further comprising:
confirming whether the user of the mobile terminal has read the reminding information or not;
and sending an alarm signal under the condition that the user of the mobile terminal does not read the reminding information.
4. The method of claim 1, wherein triggering the corresponding scene execution module to perform the function corresponding to the target scene comprises:
triggering the interior atmosphere lamp and the exterior lamp module of the vehicle to display a dynamic warning effect; and/or
Triggering a horn to give out an alarm sound; and/or
Triggering the seat module and the steering wheel module to adjust to a state of restricting sitting;
triggering an animation voice interaction module to display a dynamic warning effect; and/or
And triggering the display screen to enter the warning interface.
5. The method of claim 4, wherein the external light module comprises at least one of an interactive signal display light, a projection light, and a pass-through light.
6. The method of any of claims 1 to 5, further comprising:
and thermally updating one or more items of contents in the scene configuration information of the target scene according to user editing information, wherein the scene configuration information comprises scene execution modules needing to be triggered and functions needing to be executed thereof under different triggering conditions.
7. A scene trigger apparatus, comprising:
the detection module is used for detecting whether the vehicle is attacked or not according to the execution request of the target scene under the condition that the vehicle is in an unmanned state;
and the first triggering module is used for sending reminding information to the mobile terminal and triggering the corresponding scene execution module to execute the function corresponding to the target scene under the condition that the vehicle is detected to be attacked.
8. The apparatus of claim 7, wherein the reminder information comprises a multimedia resource acquired by a vehicle event recorder, the apparatus further comprising:
and the second triggering module is used for triggering the automobile data recorder to work according to the execution request of the target scene under the condition that the vehicle is in an unmanned state.
9. The apparatus of claim 8, further comprising:
the confirmation module is used for confirming whether a user of the mobile terminal reads the reminding information or not after the reminding information is sent to the mobile terminal;
and the alarm signal sending module is used for sending an alarm signal under the condition that the user of the mobile terminal does not read the reminding information.
10. The apparatus of claim 8, wherein the first triggering module is configured to:
triggering the interior atmosphere lamp and the exterior lamp module of the vehicle to display a dynamic warning effect; and/or
Triggering a horn to give out an alarm sound; and/or
Triggering the seat module and the steering wheel module to adjust to a state of restricting sitting;
triggering an animation voice interaction module to display a dynamic warning effect; and/or
And triggering the display screen to enter the warning interface.
11. The apparatus of claim 10, wherein the outer light module comprises at least one of an interactive signal display light, a projection light, and a pass-through light.
12. The apparatus of any one of claims 7 to 11, further comprising:
and the hot updating module is used for hot updating one or more items of contents in the scene configuration information of the target scene according to the user editing information, wherein the scene configuration information comprises the scene execution module which needs to be triggered and the functions which need to be executed under different triggering conditions.
13. A scene trigger apparatus comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 6.
14. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202010931368.5A 2020-09-07 2020-09-07 Scene triggering method, device, equipment and storage medium Active CN112061075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010931368.5A CN112061075B (en) 2020-09-07 2020-09-07 Scene triggering method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112061075A true CN112061075A (en) 2020-12-11
CN112061075B CN112061075B (en) 2021-11-05

Family

ID=73664061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010931368.5A Active CN112061075B (en) 2020-09-07 2020-09-07 Scene triggering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112061075B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101127146A (en) * 2006-08-16 2008-02-20 北京亿阳增值业务通信股份有限公司 Alarm system and vehicle alarming method based on the alarm system
KR20140053453A (en) * 2012-10-26 2014-05-08 주식회사 디에스인터내셔널 Method for controlling blackbox of vehicle by using mobile terminal equipment
CN105501182A (en) * 2015-11-30 2016-04-20 深圳市元征软件开发有限公司 Vehicle monitoring method, device and system
CN108340873A (en) * 2017-01-23 2018-07-31 长城汽车股份有限公司 Security method, system and the vehicle of vehicle
CN108501866A (en) * 2017-02-28 2018-09-07 长城汽车股份有限公司 Anti-theft alarm system for vehicles and alarm method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113602090A (en) * 2021-08-03 2021-11-05 岚图汽车科技有限公司 Vehicle control method, device and system
CN114070880A (en) * 2021-11-12 2022-02-18 上汽通用五菱汽车股份有限公司 Vehicle user-defined mode implementation method and device, electronic equipment and storage medium
CN115437705A (en) * 2022-08-02 2022-12-06 广州汽车集团股份有限公司 Method and device for providing vehicle service, electronic equipment and storage medium
CN115437705B (en) * 2022-08-02 2024-04-12 广州汽车集团股份有限公司 Method, device, electronic equipment and storage medium for providing vehicle service
WO2024051569A1 (en) * 2022-09-05 2024-03-14 华为技术有限公司 Scene display method and electronic device
CN115320622A (en) * 2022-10-12 2022-11-11 集度科技有限公司 Vehicle control method, system, electronic device and computer program product

Also Published As

Publication number Publication date
CN112061075B (en) 2021-11-05

Similar Documents

Publication Publication Date Title
CN112061075B (en) Scene triggering method, device, equipment and storage medium
US11041732B2 (en) Facilitating rider pick-up for a transport service
US10017156B2 (en) Vehicle security system
WO2018202016A1 (en) Vehicle-mounted intelligent device and safety precaution method, server and system based thereon
CN112061049B (en) Scene triggering method, device, equipment and storage medium
US20160112461A1 (en) Collection and use of captured vehicle data
US10944586B2 (en) Systems and methods for home automation monitoring
CN104980343A (en) Sharing method and system of road condition information, automobile data recorder, and cloud server
CN105357237A (en) Transmission method of driving record, automobile data recorder and system
EP3754618B1 (en) Recording control device, recording control system, recording control method, and recording control program
CN110708371A (en) Data processing method, device and system based on block chain and electronic equipment
CN104802756A (en) Method for tracking various vehicle-mounted terminals through mobile phone remote positioning
CN111740895A (en) Message notification method and device
CN112061050A (en) Scene triggering method, device, equipment and storage medium
CN110562177A (en) Alarm system, method and vehicle-mounted terminal
CN110198425A (en) Vehicular video recording method, device, storage medium and electronic device
CN108305476A (en) A kind of method, system and ambulance obtaining illegal vehicle information
CN107226062A (en) A kind of vehicle fuel burglary-resisting system and method
CN116841431A (en) Sentinel warning method, terminal device and computer program product
CN116206431A (en) Alarm system, alarm message transmission method, computer program product and vehicle
CN110853311B (en) Vehicle alarm method and device
CN112669557A (en) Alarm processing method and device, electronic equipment and readable storage medium
CN207535864U (en) A kind of vehicle fuel burglary-resisting system
CN111556131B (en) Help seeking information processing method, device and system
WO2013155623A1 (en) System and method for processing image or audio data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant