.. _ai_models:

=========
AI Models
=========

This SDK makes extensive use of AI models, particularly for document detection, MRZ reading, and text recognition (OCR) tasks.

The following models are available:

.. list-table::
  :width: 100%
  :widths: 20 80
  :header-rows: 1

  * - Name
    - Description
  * - ``DocumentDetector2A``
    - Detects documents. Best balance between accuracy and speed for mobile apps.
  * - ``DocumentDetector2B``
    - Detects documents. Fast detector for low-end mobiles.
  * - ``MrzReader2A``
    - Detects and reads machine-readable zones (MRZ). Stand-alone reader.
  * - ``OcrLatin2A``
    - Recognizes digits and characters (OCR) in the Latin alphabet.

.. important::
    
    | Model files can be downloaded from the following address: 
    | https://cloud.id3.eu/index.php/s/JYJGbn9maingpF9
      


AI model files
==============

AI model files have the ``.id3nn`` extension. You must copy the necessary model files into your application package. To reduce the size of your application, we recommend copying only the files required by the features you use.

In your application's source code, you must specify the location of these files.

.. warning::

    AI model files MUST NOT be renamed.
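
To keep the application package small, only the ``.id3nn`` files for the models you actually use need to be shipped. The helper below is a minimal sketch of this selection step, assuming each model file is named after its model (e.g. ``DocumentDetector2A.id3nn``); check this naming against the downloaded archive.

.. code-block:: python

    from pathlib import Path
    import shutil

    def copy_required_models(source_dir: str, dest_dir: str, models: list[str]) -> list[str]:
        """Copy only the .id3nn files required by the given models.

        Assumes each file is named '<ModelName>.id3nn'. Files are copied
        under their original names, since model files must not be renamed.
        """
        src, dst = Path(source_dir), Path(dest_dir)
        dst.mkdir(parents=True, exist_ok=True)
        copied = []
        for name in models:
            model_file = src / f"{name}.id3nn"
            if not model_file.is_file():
                raise FileNotFoundError(f"Missing model file: {model_file}")
            shutil.copy2(model_file, dst / model_file.name)
            copied.append(model_file.name)
        return copied

For example, ``copy_required_models("models", "app/assets/models", ["DocumentDetector2A", "MrzReader2A"])`` ships only the two files needed for document detection and MRZ reading.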


.. _model_loading:

Loading AI models
=================

It is recommended to load the AI models at application startup. The :ref:`id3_document_document_library_class` provides methods for loading and unloading AI model files in memory.

An example is given below:

.. tab-set::

    .. tab-item:: Python
      :sync: Python

      .. literalinclude:: /../samples/sample.py
        :language: python
        :start-after: [loading_ai_models]
        :end-before: [loading_ai_models]
        :dedent: 4
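
For reference, the sketch below outlines the same startup pattern in plain Python. The actual SDK loading call is passed in as a callable so the surrounding logic stays SDK-agnostic; the two-argument signature assumed here (models directory, model name) is an illustration and may differ from the real ``DocumentLibrary`` method.

.. code-block:: python

    from pathlib import Path
    from typing import Callable

    def load_models_at_startup(models_dir: str, model_names: list[str],
                               load_model: Callable[[str, str], None]) -> None:
        """Load each required AI model once, at application startup.

        `load_model` is a thin wrapper around the SDK's loading method
        (hypothetical signature: models directory, model name).
        """
        directory = Path(models_dir)
        for name in model_names:
            model_file = directory / f"{name}.id3nn"
            if not model_file.is_file():  # fail fast before calling the SDK
                raise FileNotFoundError(f"AI model file not found: {model_file}")
            load_model(str(directory), name)

Checking for the file before calling the SDK gives a clear error message when a model file was not copied into the package.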

Processing units
================

The inference of the AI models can be executed on either the CPU or the GPU (if available) by specifying a :ref:`id3_document_processing_unit_enum`.
The GPU option selects a default backend depending on your platform; detailed backend options are available if you need to target a specific backend.

.. warning::

  Inference on the GPU is an experimental feature. Some models may be unstable on some backends or produce nonsensical results.
  We strongly encourage you to verify the results on those backends, and to contact our support in case of inadequate behaviour.
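
Given this experimental status, a defensive pattern is to attempt GPU loading first and fall back to the CPU on failure. The sketch below is SDK-agnostic: the ``load`` callable stands in for a model-loading call that accepts a processing unit, and the ``"GPU"``/``"CPU"`` strings are placeholders for the actual :ref:`id3_document_processing_unit_enum` members.

.. code-block:: python

    from typing import Callable

    def load_with_fallback(load: Callable[[str], None], prefer_gpu: bool = True) -> str:
        """Try GPU inference first, falling back to the CPU if loading fails.

        `load` stands in for an SDK call taking a processing unit; the
        "GPU"/"CPU" strings are placeholders for the real enum members.
        Returns the processing unit that was actually used.
        """
        if prefer_gpu:
            try:
                load("GPU")
                return "GPU"
            except Exception:
                pass  # GPU backend unavailable or unstable: fall back to CPU
        load("CPU")
        return "CPU"

This keeps the application usable on devices where the GPU backend is missing or misbehaves, at the cost of slower CPU inference.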


See also
========
- :ref:`id3_document_document_library_class`
- :ref:`id3_document_document_library_load_model_class_method`