.. pyXla documentation master file, created by
   sphinx-quickstart on Wed May 21 11:59:40 2025.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

``pyXla`` docs
==============

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   guides
   examples
   reference

``pyXla`` (**Py**\ thon package for e\ **X**\ plainable **l**\ andscape **a**\ nalysis) is a flexible and generic framework for explainable landscape analysis (xLA). Explainability is achieved by implementing landscape analysis (LA) features that are meant for intuitive interpretation and, wherever possible, by accompanying the metrics with visualisations. Flexibility and generality are achieved by separating sampling from the analysis itself, so that all implemented techniques operate on a unified input format. As a result, the framework can produce results even with a minimal set of input components.

Other branches of landscape analysis, such as exploratory landscape analysis (ELA), have leaned more towards automated problem analysis, where low-level features are used to train machine learning models that distinguish between problems for the purpose of automated algorithm selection :cite:p:`renau2021towards`, :cite:p:`gallagher2024towards`. Applying machine learning to uninterpretable low-level features entails a loss of explainability: the focus shifts from intuitively understanding the nature of optimisation problems and the associated algorithmic performance to a reliance on black-box problem characterisation and algorithm selection :cite:p:`renau2021towards`. Explainability is a key aim of LA and a desirable property in its own right, all the more so in the context of explainable artificial intelligence (xAI); ELA's dependence on machine learning pipelines limits it.

In ``pyXla``, inputs are either provided in designated files or generated by user-specified functions.
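To illustrate the two input routes just described, here is a minimal numpy sketch. It does not use the ``pyXla`` API: the file name, the ``sphere`` function, and the array shapes are all illustrative assumptions, showing only how objective values might arrive either from a file or from a user-specified generator function.

```python
import numpy as np

# Route 1: inputs supplied in a designated file (illustrative file name;
# pyXla's actual loaders and file conventions are not shown here).
# F = np.loadtxt("objective_values.txt")

# Route 2: inputs generated by a user-specified function.
def sphere(x):
    """A toy objective function (sum of squares)."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(seed=0)
X = rng.uniform(-5.0, 5.0, size=(100, 3))    # 100 candidate solutions in 3-D
F = np.array([sphere(x) for x in X])         # objective value of each solution

print(F.shape)  # (100,)
```

Either route ends with the same arrays, which is what lets every analysis technique operate on one unified input format.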
The inputs are: ``F`` for objective values, ``X`` for solutions, ``V`` for violation values, ``D`` for pairwise distances between solutions, ``N`` for neighbourhood relationships between solutions, and ``I`` for any other additional input. In addition, the following sampling algorithms for continuous problems are implemented: random walk sampling, adaptive walk sampling, and Hilbert curve sampling.

A total of 16 explainable landscape analysis features, grouped into 4 categories, are implemented:

- the **statistical features**: distribution of objective values (``distr_F``), distribution of violation values (``distr_V``), correlation of values (``corr``), correlation of ranks (``corr_ranks``), and variable importance (``X_imp``);
- the **ranking-based features**: distribution of Pareto ranks (``distr_Par``) and distribution of Deb's ranks (``distr_Deb``);
- the **distance-based features**: fitness distance correlation (``FDC``), violation distance correlation (``VDC``), rank distance correlation (``RDC``), pairwise distance correlation (``PDC``), and dispersion of the best solutions (``disp_best``); and
- the **neighbourhood features**: neighbouring solutions' objective values correlation (``NFC``), neighbouring solutions' violation values correlation (``NVC``), neighbouring solutions' ranks correlation (``NRC``), and neighbouring change in feasibility (``NCF``).

The ``pyXla`` framework conveniently composes the 16 landscape analysis features and the three sampling algorithms, thus simplifying explainable landscape analysis. The figure below is a high-level diagram of the framework.

.. figure:: _static/img/pyxla-high-level.png
   :width: 100%
   :alt: High-level diagram of the pyXla framework

   A high-level diagram of the pyXla framework.

**References**

.. bibliography::
   :filter: docname in docnames
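As a concrete taste of one distance-based feature, the sketch below computes fitness distance correlation in its textbook form: the Pearson correlation between objective values and each solution's distance to the best sampled solution. This is a generic numpy illustration under that assumed definition, not necessarily ``pyXla``'s exact ``FDC`` implementation; the sphere objective and sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
X = rng.uniform(-5.0, 5.0, size=(200, 2))   # sampled solutions
F = np.sum(X ** 2, axis=1)                  # objective values (sphere function)

# Distance of every solution to the best sampled solution.
best = X[np.argmin(F)]
d = np.linalg.norm(X - best, axis=1)

# FDC: Pearson correlation between objective values and those distances.
fdc = np.corrcoef(F, d)[0, 1]
print(round(fdc, 3))  # strongly positive for this unimodal problem
```

A value near +1 (for minimisation) suggests that fitness improves steadily as one approaches the best solution, which is the kind of directly interpretable signal the framework favours.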