GB2540878A - Method for synchronizing a display unit of a vehicle and a remote unit - Google Patents

Method for synchronizing a display unit of a vehicle and a remote unit

Info

Publication number
GB2540878A
Authority
GB
United Kingdom
Prior art keywords
vehicle
user interface
unit
remote unit
remote
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1612050.3A
Other versions
GB201612050D0 (en)
Inventor
Hill Andrew
Niedhammer Florian
Ouellette Matthew
Smiroldo Rigel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mercedes Benz Group AG
Original Assignee
Daimler AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daimler AG filed Critical Daimler AG
Priority to GB1612050.3A priority Critical patent/GB2540878A/en
Publication of GB201612050D0 publication Critical patent/GB201612050D0/en
Publication of GB2540878A publication Critical patent/GB2540878A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/147Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/12Synchronisation between the display unit and other units, e.g. other display units, video-disc players
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/14Display of multiple viewports
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0492Change of orientation of the displayed image, e.g. upside-down, mirrored
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/14Use of low voltage differential signaling [LVDS] for display data communication
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/16Use of wireless transmission of display information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A display unit of a vehicle is synchronized with a user interface of a remote unit being different from the vehicle. The remote unit may be a mobile phone or tablet. A processing unit of the vehicle receives a video feed from the remote unit, the video feed characterizing the user interface of the remote unit. The processing unit performs optical character recognition (OCR) on at least a part of the video feed and generates a second user interface on the basis of the optical character recognition. The second user interface is presented on at least one display of the display unit. The second user interface may resemble the first user interface at least with respect to letters of the respective user interfaces.

Description

Method for Synchronizing a Display Unit of a Vehicle and a Remote Unit
The invention relates to a method for synchronizing at least one display unit of a vehicle, and a user interface of a remote unit.
Vehicles comprising display units are well-known from the general prior art. For example, the display unit is a component of a head unit of a vehicle, wherein, for example, the head unit is part of an infotainment system of the vehicle. The display unit can comprise at least one display configured to present a user interface which is also referred to as a graphical user interface (GUI). Usually, the display unit and, thus, the display are arranged in the interior of the vehicle so that a person being in the interior can at least optically perceive the display unit and, especially, the user interface shown on the display.
Moreover, it is well-known from the general prior art to connect a remote unit with a vehicle, wherein, for example, the remote unit is a mobile terminal such as a mobile phone, a smartphone, a tablet, etc. The remote unit also comprises at least one display configured to present or show a user interface. The respective user interface is usually used to optically present a current status of the vehicle or the remote unit respectively. It has been shown that a possible discrepancy in the respective user interfaces can cause misinformation.
It is therefore an object of the present invention to provide a method by means of which a particularly advantageous interaction between a vehicle and a remote unit being different from the vehicle can be realized.
This object is solved by a method having the features of patent claim 1. Advantageous embodiments with expedient developments of the invention are indicated in the other patent claims.
The present invention relates to a method for synchronizing at least one display unit of a vehicle, and a user interface of a remote unit, wherein the remote unit is different from the vehicle. This means the remote unit is not a component of the vehicle, but the vehicle and the remote unit are separate and independent components of their own. For example, the remote unit is a mobile terminal such as a mobile phone, smartphone, tablet, PC, etc.
The method according to the present invention comprises a first step in which, by a processing unit of the vehicle, a video feed is received from the remote unit. In other words, the video feed is transmitted from the remote unit to the processing unit. For example, the processing unit and the display unit of the vehicle are components of a head unit or infotainment system so that, for example, the display unit can be controlled by the processing unit. The video feed characterizes or defines the user interface of the remote unit. For example, said user interface of the remote unit is a graphical user interface (GUI) which is shown on a display of the remote unit.
The method according to the present invention comprises a second step in which, by the processing unit, optical character recognition (OCR) is performed on at least a part of the video feed. For example, the processing unit performs optical character recognition on at least one specifically formatted part of the video feed. Said optical character recognition can be performed in order to determine a state of the user interface of the remote unit and data used to enable its functionality.
The method according to the present invention further comprises a third step in which, by the processing unit, a second user interface is generated on the basis of the optical character recognition. For example, by performing said optical character recognition, information is gathered, said information being indicative of the user interface of the remote unit, wherein the user interface of the remote unit is also referred to as a first user interface. Said information is used to generate said second user interface.
The method according to the present invention further comprises a fourth step in which the second user interface is presented on at least one display of the display unit. In other words, said information gathered by said optical character recognition is used to ensure the display unit, in particular its display, is in a congruent state with respect to the remote unit, in particular its display.
By means of the method according to the present invention, an excessive discrepancy between the user interface of the remote unit and the user interface of the display unit, i.e. an excessive discrepancy between the remote unit and the display unit, can be avoided, so that misinformation can be avoided. Since the second user interface is generated on the basis of optical character recognition performed on the video feed characterizing the first user interface, the user interfaces can be similar or resemble one another, so that, for example, the user interfaces comprise the same or similar information content. By means of the method according to the present invention, an excessive discrepancy between the display unit of the vehicle and the remote unit can be avoided even if the remote unit does not share specific information with the vehicle. In other words, when a remote unit does not share specific information with a vehicle, this can usually lead to different information content and different user experiences in different states. This problem can be avoided by means of the method according to the present invention.
Further advantages, features, and details of the invention derive from the following description of a preferred embodiment as well as from the drawing. The features and feature combinations previously mentioned in the description as well as the features and feature combinations mentioned in the following description of the figures and/or shown in the figures alone can be employed not only in the respectively indicated combination but also in other combinations or taken alone without leaving the scope of the invention.
The drawing shows in:
Fig. 1 a flow diagram illustrating a method for synchronizing at least one display unit of a vehicle, and a user interface of a remote unit;
Fig. 2 a schematic view of a typical telematics control;
Fig. 3 a flow diagram for illustrating a common OCR workflow;
Fig. 4 a schematic view of the display unit;
Fig. 5 a further schematic view of the display unit; and
Fig. 6 a further schematic view of the display unit for illustrating a blob extraction.
In the figures the same elements or same elements having the same functions are indicated by the same reference signs.
Fig. 1 shows a flow diagram illustrating a method for synchronizing at least one display unit of a vehicle and a user interface of a remote unit. For example, the remote unit comprises at least one display, and the user interface of the remote unit is also referred to as a first user interface which is shown on the at least one display of the remote unit. For example, said display unit of the vehicle can comprise at least one display which is, for example, arranged in an interior of a vehicle so that said display unit is a component of the vehicle. However, the remote unit is different from the vehicle. This means the remote unit is not a component of the vehicle but a separate and independent component. The remote unit can be a mobile terminal such as a mobile phone, smartphone, tablet PC, etc.
For example, the first user interface comprises certain information content so that the first user interface is used to optically present a current state of the remote unit to persons looking at the display of the remote unit. For example, the remote unit is connected with the vehicle. The remote unit can be connected to the vehicle via a mechanical or physical connection comprising at least one cable which is connected to the vehicle and the remote unit respectively. Alternatively or additionally, the remote unit can be connected with the vehicle via a wireless data connection. Said method is performed to avoid an excessive discrepancy between the remote unit, in particular its first user interface, and the display unit, in particular a second user interface shown on the display of the display unit of the vehicle.
For example, the second user interface is used to optically present information to persons looking at the display of the display unit by, for example, optically presenting a current state of the remote unit to persons looking at the display of the display unit of the vehicle. However, usually, since a remote unit does not necessarily share specific information with the vehicle the first user interface, in particular its information content, can differ from the second user interface, in particular its information content. This discrepancy can lead to misinformation and a cumbersome operation of the vehicle.
These disadvantages and problems can be avoided by said method which comprises a first step S1 in which, by a processing unit of a vehicle, a video feed from the remote unit is received. This means the video feed is transmitted from the remote unit to the processing unit, for example, via said connection between the remote unit and the vehicle. Said video feed characterizes the first user interface of the remote unit.
The method further comprises a second step S2 in which, by the processing unit, optical character recognition (OCR) is performed on at least a part of the video feed. For example, the processing unit is also a component of the vehicle which comprises the display unit and the processing unit so that, for example, the processing unit is configured to control the display unit, in particular its display.
In a third step S3 of said method, a second user interface is, by the processing unit, generated on the basis of the optical character recognition. The method also comprises a fourth step S4 in which the second user interface is presented on said display of the display unit of the vehicle. The processing unit is also a component of the vehicle and analyzes the video feed coming from the remote unit. Moreover, the processing unit performs optical character recognition on, for example, specifically formatted parts of the video feed in order to determine a current state of the remote unit and data used to enable its functionality thereby gathering information about the remote unit, in particular its first user interface. Then, said information is used to ensure the user interfaces of the display unit of the vehicle and the remote unit are in a congruent state so that, for example, the user interfaces have similar or the same information content based on the specifically formatted parts of the processed video feed. Thus, misinformation can be avoided.
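Taken together, steps S1 to S4 form a simple processing pipeline. The following Python sketch illustrates that flow only; the function names (`receive_frame`, `run_ocr`, `generate_second_ui`, `render_ui`) and the string stand-in for a decoded video frame are hypothetical placeholders, since the patent does not specify an implementation.

```python
# Minimal sketch of the S1-S4 pipeline. A plain string stands in for the
# text content recovered from one frame of the remote unit's video feed.

def receive_frame(feed):
    """S1: receive one frame of the video feed from the remote unit."""
    return feed.pop(0) if feed else None

def run_ocr(frame):
    """S2: perform OCR on (part of) the frame. The 'frame' here is
    already text, so recognition is simulated by trimming it."""
    return frame.strip()

def generate_second_ui(recognized_text):
    """S3: build a second user interface state from the OCR result."""
    return {"widget": "address_field", "text": recognized_text}

def render_ui(ui_state):
    """S4: present the second user interface on the vehicle display."""
    return f"[display] {ui_state['widget']}: {ui_state['text']}"

feed = ["  1600 Amphitheatre Pkwy  "]   # simulated video feed
frame = receive_frame(feed)             # S1
text = run_ocr(frame)                   # S2
ui = generate_second_ui(text)           # S3
print(render_ui(ui))                    # S4
```

In a real system, S1 would decode frames arriving over the wired or wireless connection and S2 would run the OCR workflow described below.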
In the following, a use case of the above-described method will be described. The basic idea of said method is that the emergence of remote UI (user interface) technologies in the automotive field gives consumers a variety of choices of interfaces for their telematics system. This can, however, leave the user in the position of having multiple UIs with different data in incongruent states, which makes for a bad user experience. In this regard, image detection can be used to detect certain elements in the video stream of the remote UI (the user interface of the remote unit) in order to synchronize the state of the onboard head unit (the display unit of the vehicle) to the state of the remote UI.
In this way, value can be added to telematics systems, and the native UI (the user interface of the display unit of the vehicle) can be accessed from the remote UI.
In the last few years, several technologies have surfaced that allow a user to provide their own user interface through their smartphone, i.e. the remote unit. These include MirrorLink, Mercedes Digital DriveStyle, Apple CarPlay and Android Automotive, and they provide a method to use the vehicle's automotive controller and display to control the remote UI provided by the user's smartphone (remote unit). Most automotive OEMs as well as several aftermarket manufacturers have committed to or delivered remote UI solutions.
The visual design of the head unit as well as its user experience form an important part of the overall vehicle experience, so there is significant concern about relinquishing control of the head unit user experience to an external company. External companies, such as handset manufacturers, are unlikely to share the same values as the manufacturer of the vehicle, perhaps favoring features over safety or class. Additionally, telematics systems are sold in different configurations for different price points and markets, so commoditization of the head unit represents a potential loss for the manufacturer of the vehicle.
Some features, such as the navigation system, are inherently more effective with access to the vehicle bus. In this case, the built-in head unit has access to CAN signals that provide not only GPS signals but also information about real speed and direction of travel. On the other hand, mobile handsets, i.e. remote units, have some features that are difficult to implement as effectively in the head unit. For instance, the handset by its nature will have up-to-date connectivity and will likely hold user personal data and have access to things like social networks. While the navigation experience itself may be superior on the head unit, it is possible that a remote UI solution may have a more convenient way to access and enter the address where one intends to travel.
From the handset vendor's perspective, it makes sense to open up an API to allow the manufacturer of the vehicle to push automotive information to the handset (remote unit). From the vehicle manufacturer's perspective, it would make sense for the API to provide manufacturers (OEMs) with user data. It is advantageous for either company to control the UI layer itself and to reduce the other company to simply a vendor for the data needed by the UI. Herein, a method is provided to receive information from the handset (remote unit) without a need for vendors to explicitly provide this information via an API, thereby avoiding losing control of the UI layer.
In order to provide remote UI capabilities from a handset (remote unit), the vehicle, which is also referred to as a car, must provide, at a minimum, a video path and data from the vehicle's control elements. This means the car, or the manufacturer of the car, has access to the raw video feed from the handset regardless of any other information that is available. As an example, Fig. 2 shows the video architecture of, for example, DriveKit Plus with Digital DriveStyle, a Mercedes-branded remote UI experience.
As the accessory hardware module, provided by the OEM, has full access to the video from the smartphone, it can interpret this video feed to gain further information. This information could include destinations and navigation information or currently played media selections; however, it will necessarily be limited to the information currently being displayed.
Moreover, information can be extracted from the video feed using optical character recognition (OCR). OCR is a relatively mature field, especially for well-formed texts and fonts. However, due to the different development cycles of the automotive and mobile industries, it is likely that formats and typefaces will change in the source image faster than new automotive systems can be deployed to counter these changes. For this reason, a general OCR algorithm should be used rather than simply a font matching algorithm. Most OCR algorithms follow a common workflow, which is illustrated in Fig. 3: preprocessing, segmentation, feature extraction, classification and post-processing.
Preprocessing consists of operations such as color space transformation, noise reduction and image normalization. As the nature of characters is based on their shape alone and not on their color, in the method the image is converted from RGB color space into grey scale, and then from grey scale into black and white, i.e. a binary color space.
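The preprocessing chain (RGB to grey scale to binary) can be expressed in a few lines of NumPy. This is a generic sketch, not code from the patent; the ITU-R BT.601 luminance weights and the fixed threshold of 128 are common defaults, not values the document specifies.

```python
import numpy as np

def preprocess(rgb):
    """RGB frame -> binary (black/white) image ready for OCR.
    rgb: uint8 array of shape (H, W, 3)."""
    # Standard ITU-R BT.601 luminance weights for the grey-scale step.
    grey = rgb @ np.array([0.299, 0.587, 0.114])
    # Fixed global threshold (assumed); real systems often prefer
    # adaptive or Otsu thresholding instead.
    return (grey >= 128).astype(np.uint8)  # 1 = bright (foreground) pixel

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = [255, 255, 255]   # white pixel -> luminance 255 -> 1
frame[1, 1] = [200, 10, 10]     # dark red -> luminance ~ 67 -> 0
binary = preprocess(frame)
print(binary)
```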
Image segmentation is the separation of a single frame into smaller subframes that are easier to examine. The biggest challenge is to determine which subframes are important; in the present case, which subframes are most likely to contain text, and more specifically the text that is of interest for the method. For known images, and if the area of interest is fixed and does not change, simple cropping of the source image can be executed. A more flexible approach is to use blob detection, which detects a region that is somehow important for the algorithm. For example, a method for blob detection called maximally stable extremal regions (MSER) is used in the method; MSER is a recently created detection method, though many others could have been used. Feature extraction is a manipulation of the raw data into a vector of inputs to the classification step. The efficacy of the feature extraction makes a significant difference to the overall performance of classification, as it controls what exactly is learned.
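MSER itself is fairly involved, so the segmentation idea, finding candidate text blobs in a binary image, is illustrated here with plain 4-connected component labelling instead. This is explicitly a simpler substitute technique, not the MSER algorithm the patent names; in practice one would use a library implementation of MSER.

```python
from collections import deque

def blobs(binary):
    """Return bounding boxes (top, left, bottom, right) of 4-connected
    foreground components in a binary image (list of rows of 0/1)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # Flood-fill one component, tracking its extent.
                top, left, bottom, right = y, x, y, x
                q = deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    top, bottom = min(top, cy), max(bottom, cy)
                    left, right = min(left, cx), max(right, cx)
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes

img = [[1, 1, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 1]]
print(blobs(img))  # two candidate blobs, e.g. text regions to crop for OCR
```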
Classification takes the input vector created in the feature extraction step and works to classify the image as a character. There are different types of classifiers that are able to handle this task. A naive approach is to do pattern matching. In this approach, previously stored images of characters are used to calculate the normalized cross-correlation coefficient with the character image that is to be classified. The character that shows the highest correlation coefficient with the source character is most likely the character that is actually represented in the image. This approach provides acceptable results if there is only one font which will not change in the future; however, since the system should be kept flexible enough to handle possible changes in font style and size, a more sophisticated approach can be useful. The state-of-the-art approach is the use of artificial neural networks (ANNs), which have the ability to learn based on different character sets and find commonalities. This gives the possibility of the method working with a variety of input fonts, including ones not yet made public or not yet created. Post-processing turns a set of classifications into some data form that can be worked with more easily, for example turning a set of classified characters into a string that can be processed programmatically.
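The naive pattern-matching classifier described above reduces to computing a normalized cross-correlation coefficient between the candidate character image and each stored template, then picking the best-scoring template. A NumPy sketch follows; the 3x3 "font" templates and the noisy candidate are invented toy data for illustration.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def classify(candidate, templates):
    """Return the label of the template best correlated with the candidate."""
    scores = {label: ncc(candidate, t) for label, t in templates.items()}
    return max(scores, key=scores.get)

# Toy 3x3 'font': one stored template per character.
templates = {
    "I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]], float),
}
noisy_I = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 1]], float)  # corrupted 'I'
print(classify(noisy_I, templates))  # still closest to the 'I' template
```

As the description notes, this works only while the source font is fixed; an ANN-based classifier generalizes across fonts.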
In the following, a technical implementation will be described. The technical implementation of this invention can be performed so as to recognize an address entry in a navigation screen of the remote UI, i.e. the user interface of the remote unit, and pass this information back to the vehicle's native head unit (display unit) in order to mirror the navigation on this native system. However, the system can work with any similar remote UI product, or any other application, such as synchronizing music selections or contact book phone numbers and addresses.
First, the screen from the head unit display is acquired and henceforth will be called the source image. In the implementation, for example, an accessory hardware module is used. In addition, some pre-stored images that will be used as known patterns are loaded from, for example, disk and converted into binary images. The patterns are specific parts of images used to identify a position or spot in a known menu, for example, or simply to find out which image is currently shown, for example, on the head unit. Next, the normalized cross-correlation coefficient is computed between the source image and all patterns. The patterns are smaller in size than the source image. The normalized cross-correlation is used to find the patterns in the source image: for each pattern the source image is tested against, the cross-correlation between that pattern and the source image is computed. If a correlation coefficient is sufficiently high, a match is declared; otherwise, comparison continues through the set of test images. Fig. 4 shows an example of an image that meets the correlation comparison threshold, whereas Fig. 5 shows an image that does not meet the threshold. After the successful identification of the current screen of Fig. 4 from the remote UI, the area of interest is extracted and used for the actual optical character classification. As discussed above, the blob detection algorithm known as maximally stable extremal regions (MSER) is used. The results of this step can be seen in Fig. 6. Some post-processing may be necessary after the MSER step in order to verify that the blob has been correctly extracted. Lastly, in this navigation-based example, the OCR-captured address information from the remote UI, as shown in Fig. 6, can then be automatically entered into the native head unit of the vehicle, which would then navigate to the address originally selected on the remote UI, creating a seamless navigation experience.
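The screen-identification loop described above (correlate the source image against each stored pattern and declare a match only above a threshold) can be sketched as follows. The threshold of 0.9 and the pattern names are illustrative assumptions, and patterns are assumed to be pre-cropped to the size of the compared region rather than searched across the full frame.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size images."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d else 0.0

def identify_screen(source, patterns, threshold=0.9):
    """Return the name of the first stored pattern whose correlation with
    the source image meets the threshold, or None if no screen matches."""
    for name, pattern in patterns.items():
        if ncc(source, pattern) >= threshold:
            return name
    return None

# Hypothetical pattern set for two known remote-UI screens (toy bitmaps).
patterns = {
    "nav_address_entry": np.array([[1, 0], [0, 1]], float),
    "media_player":      np.array([[1, 1], [0, 0]], float),
}
source = np.array([[1, 0], [0, 1]], float)   # matches the navigation screen
print(identify_screen(source, patterns))
```

Only after a screen is identified this way would the blob extraction and OCR steps run on its area of interest.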
Moreover, in the following, the use of the text information is described. For example, Mercedes-Benz head units and accessory hardware units have multiple channels for passing information and can take the text information reassembled through the OCR process. The implementation in the method is, for example, performed for address data, which can be passed to the navigation unit using internal interface calls; however, analogous operations are possible for specific media selections (to synchronize the remote UI track selection with the native media implementation for the same smartphone) and address book contacts (to synchronize recent callers).
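Once text has been reassembled through OCR, routing it to the right in-vehicle subsystem is a simple dispatch. All interface names below (`NavigationUnit`, `MediaUnit`, `set_destination`, `select_track`) are hypothetical stand-ins for the internal interface calls the document mentions but does not specify.

```python
class NavigationUnit:
    """Hypothetical stand-in for the head unit's navigation interface."""
    def __init__(self):
        self.destination = None
    def set_destination(self, address):
        self.destination = address

class MediaUnit:
    """Hypothetical stand-in for the native media implementation."""
    def __init__(self):
        self.track = None
    def select_track(self, title):
        self.track = title

def dispatch(kind, text, nav, media):
    """Route OCR-recovered text to the matching vehicle subsystem."""
    if kind == "address":
        nav.set_destination(text)
    elif kind == "media":
        media.select_track(text)
    else:
        raise ValueError(f"unknown payload kind: {kind}")

nav, media = NavigationUnit(), MediaUnit()
dispatch("address", "10 Downing Street, London", nav, media)
print(nav.destination)
```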
List of reference signs
S1 first step
S2 second step
S3 third step
S4 fourth step

Claims (4)

Patent Claims
1. A method for synchronizing at least one display unit of a vehicle, and a user interface of a remote unit being different from the vehicle, the method comprising: - receiving, by a processing unit of the vehicle, a video feed from the remote unit, the video feed characterizing the user interface of the remote unit (step S1); - performing, by the processing unit, optical character recognition on at least a part of the video feed (step S2); - generating, by the processing unit, a second user interface on the basis of the optical character recognition (step S3); and - presenting the second user interface on at least one display of the display unit (step S4).
2. The method according to claim 1, wherein the video feed is received by the processing unit from the remote unit via a wireless data connection between the processing unit and the remote unit.
3. The method according to any one of claims 1 or 2, wherein the second user interface resembles the first user interface at least with respect to letters of the respective user interfaces.
4. A vehicle configured to perform a method according to any one of the preceding claims.
GB1612050.3A 2016-07-12 2016-07-12 Method for synchronizing a display unit of a vehicle and a remote unit Withdrawn GB2540878A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1612050.3A GB2540878A (en) 2016-07-12 2016-07-12 Method for synchronizing a display unit of a vehicle and a remote unit

Publications (2)

Publication Number Publication Date
GB201612050D0 GB201612050D0 (en) 2016-08-24
GB2540878A true GB2540878A (en) 2017-02-01

Family

ID=56890982

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1612050.3A Withdrawn GB2540878A (en) 2016-07-12 2016-07-12 Method for synchronizing a display unit of a vehicle and a remote unit

Country Status (1)

Country Link
GB (1) GB2540878A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120183221A1 (en) * 2011-01-19 2012-07-19 Denso Corporation Method and system for creating a voice recognition database for a mobile device using image processing and optical character recognition
US20130222223A1 (en) * 2012-02-24 2013-08-29 Nokia Corporation Method and apparatus for interpreting a gesture
US20140129941A1 (en) * 2011-11-08 2014-05-08 Panasonic Corporation Information display processing device
US20150019967A1 (en) * 2013-07-15 2015-01-15 BNC Systems Ltd. System and method for promoting connectivity between a mobile communication device and a vehicle touch screen

Also Published As

Publication number Publication date
GB201612050D0 (en) 2016-08-24

Similar Documents

Publication Publication Date Title
US10671879B2 (en) Feature density object classification, systems and methods
CN110171372B (en) Interface display method and device of vehicle-mounted terminal and vehicle
US10755083B2 (en) Terminal for vehicle and method for authenticating face
US9335826B2 (en) Method of fusing multiple information sources in image-based gesture recognition system
CN108182714B (en) Image processing method and device and storage medium
WO2008038096A1 (en) Improved user interface
KR20210058887A (en) Image processing method and device, electronic device and storage medium
WO2019105457A1 (en) Image processing method, computer device and computer readable storage medium
US11854209B2 (en) Artificial intelligence using convolutional neural network with hough transform
CN111539269A (en) Text region identification method and device, electronic equipment and storage medium
CN111666941A (en) Text detection method and device and electronic equipment
CN111368841A (en) Text recognition method, device, equipment and storage medium
CN105224939B (en) Digital area identification method and identification device and mobile terminal
CN110097057B (en) Image processing apparatus and storage medium
Jwaid et al. Study and analysis of copy-move & splicing image forgery detection techniques
US9053383B2 (en) Recognizing apparatus and method, program, and recording medium
US20230419677A1 (en) Dash cam having anti-theft function and anti-theft system for dash cam
CN106502406A (en) Application program deployment method, device and terminal unit
US20120201470A1 (en) Recognition of objects
GB2540878A (en) Method for synchronizing a display unit of a vehicle and a remote unit
KR102170416B1 (en) Video labelling method by using computer and crowd-sourcing
WO2019133980A1 (en) Backdrop color detection
KR20120035360A (en) Apparatus for recognizing character and method thereof
US10963678B2 (en) Face recognition apparatus and face recognition method
CN114764839A (en) Dynamic video generation method and device, readable storage medium and terminal equipment

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)