Code commentary on the implementation of Tractable and Practicable Inducers

Introduction

Readers interested mainly in implementation should focus on the following sections of the paper: ‘Substrate structures’, ‘Shuffled history’, ‘Rolls’, ‘Computation time and representation space’, ‘Rolled alignment’, ‘Decomposition alignment’, ‘Computation of alignment’, ‘Tractable alignment-bounding’ and ‘Practicable alignment-bounding’.

The references to page numbers below are to the April 2018 version (201803290540.PAP1) of the paper.

The layout of this commentary is different from the pages of the Haskell implementation of the Overview and Python implementation of the Overview. There the Haskell and Python code was embedded in the text from the Overview section of the paper. Here, by contrast, this page follows the definition of inducers from the theoretical abstractions to the practicable implementations, with text from the paper interspersed with links to pages containing Haskell code and Python code. In this way the overall flow of the discussion is less interrupted by specific implementation issues. The following text comes mainly from the ‘Overview’ section ‘Tractable and practicable aligned induction’ P84, the ‘Induction’ section P741 and the ‘Practicable alignment-bounding’ section P633.

Aligned induction

In the ‘Overview’ section ‘Aligned induction’ P73 it is shown that, given (i) necessary formal-abstract, (ii) mid-ideal transform and (iii) probable sample, the maximum likelihood estimate $\tilde{T}$ for the transform, $T$, is approximated by a maximisation of the derived alignment, \[ \begin{eqnarray} \tilde{T} &\in& \mathrm{maxd}(\{(M,\mathrm{algn}(A * M)) : \\ &&\hspace{6em}M \in \mathcal{T}_{U,V},~A^{\mathrm{X}} * M = (A * M)^{\mathrm{X}},~A = A * M * M^{\dagger A}\}) \end{eqnarray} \] This optimisation is intractable because the cardinality of the substrate transforms, $|\mathcal{T}_{U,V}|$, is factorial in the volume, $v = |V^{\mathrm{C}}|$. Consider how limited searches can be made tractable and then practicable.
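The selection step itself is straightforward once the candidate transforms and their derived alignments are available; the intractability lies in enumerating the candidates, not in choosing among them. The following is a minimal Haskell sketch of the selection only, assuming a list of candidate models paired with their derived alignments. The name maxd and the example candidates are illustrative here rather than the paper's or the library's definitions.

import Data.List (maximumBy)
import Data.Ord (comparing)

-- Return the models whose paired derived alignment attains the maximum.
maxd :: Ord a => [(m, a)] -> [m]
maxd [] = []
maxd ps = [m | (m, a) <- ps, a == amax]
  where
    amax = snd (maximumBy (comparing snd) ps)

-- Hypothetical usage: three candidate models with their derived alignments.
example :: [String]
example = maxd [("M1", 12.3), ("M2", 45.6), ("M3", 45.6 :: Double)]
-- example == ["M2","M3"]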

Inducer preliminaries

This discussion defines (i) Computers, (ii) Sized cardinal substrate histograms and (iii) Inducers. It concludes with a definition of the literal derived alignment inducer.

Inducer preliminaries
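As a preliminary illustration, a sized cardinal substrate histogram can be pictured as a map from states to rational counts, where each state assigns a value to every substrate variable and the size z is the sum of the counts. The following is a minimal Haskell sketch under that assumption; the type names and the function size are illustrative, not the library's own definitions.

import qualified Data.Map as Map
import Data.Map (Map)
import Data.Ratio ((%))

-- A state assigns a value to each substrate variable; a histogram maps
-- states to rational counts.
type Variable  = Int
type Value     = Int
type State     = Map Variable Value
type Histogram = Map State Rational

-- The size z of a histogram is the sum of its counts.
size :: Histogram -> Rational
size = sum . Map.elems

-- Hypothetical example: a two-variable histogram of size 3.
exampleHistogram :: Histogram
exampleHistogram = Map.fromList
  [ (Map.fromList [(1, 0), (2, 0)], 2 % 1)
  , (Map.fromList [(1, 1), (2, 1)], 1 % 1) ]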

Tractable inducers

Although the literal derived alignment inducer, $I_{z,\mathrm{a,l}}'$, is finitely computable, it is intractable. Section ‘Tractable alignment-bounding’ P568 discusses the various intractabilities and the classes of limits and constraints on the structures of more tractable inducers.

Tractable inducers

Practicable inducers

In order to investigate the constraints necessary to make tractable inducers practicable, section ‘Substrate models computation’ P634 considers how the limited-models non-overlapping infinite-layer substrate fuds, $\mathcal{F}_{\infty,U_A,V_A} \cap \mathcal{F}_{\mathrm{n}} \cap \mathcal{F}_{\mathrm{q}}$, may be constructed. Section ‘Optimisation’ P665 goes on to consider the explicit definitions of (i) the limited-models constraints, and (ii) the layer-ordered limited-underlying limited-breadth infinite-layer substrate fuds trees, in order to define a finite search for a practicable inducer. Section ‘Implementation’ P725 concludes with a discussion of explicit implementations of practicable inducers.
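Viewed operationally, the layer-ordered limited-underlying limited-breadth search amounts to a breadth-limited, layer-by-layer traversal of candidate fuds. The following is a generic Haskell sketch of such a traversal, a plain beam search rather than the paper's optimiser definitions; the names beamSearch, neighbours and objective are illustrative assumptions, with the objective standing in for the derived alignment.

import Data.List (sortBy)
import Data.Ord (comparing, Down(..))

-- Keep at most bmax candidates per layer, expand each with a neighbourhood
-- function, and stop after lmax layers or when no new candidates arise.
beamSearch
  :: Ord s
  => Int            -- lmax: limit on the number of layers
  -> Int            -- bmax: limit on the breadth kept at each layer
  -> (c -> [c])     -- neighbourhood: candidates reachable in one layer
  -> (c -> s)       -- objective to maximise at each layer
  -> [c]            -- initial candidates
  -> [c]            -- candidates surviving the final layer
beamSearch lmax bmax neighbours objective = go lmax
  where
    keep = take bmax . sortBy (comparing (Down . objective))
    go 0 cs = keep cs
    go l cs =
      case concatMap neighbours (keep cs) of
        [] -> keep cs           -- no further candidates; stop early
        ns -> go (l - 1) ns

A fuller treatment would also retain the best candidate seen across all layers; the sketch only returns the survivors of the final layer.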

Substrate models computation

As defined in the discussion of Tractable inducers above, the summed alignment valency-density aligned fud decomposition inducer, $I_{z,\mathrm{Sd,D,F},\infty,\mathrm{n,q}}'$, is a computer that lacks explicit definitions of (i) the limited-models constraints, $\mathcal{F}_{\mathrm{q}}$, (ii) the finite representations of the substrate models and their traversal, and (iii) the alignmenter, $I_{\mathrm{a}}$, and real approxer, $I_{\approx\mathbf{R}}$. The following section, ‘Substrate models computation’, considers explicit definitions so that it can be determined whether an implementation of the tractable inducer can be shown to be practicable given particular computation time and space resources.

Substrate models computation
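To illustrate the sort of explicit definition required, consider the alignmenter. The following is a minimal Haskell sketch for the two-variable case, assuming the definition of alignment as the sum of the log-factorials of the counts less the sum of the log-factorials of the counts of the independent histogram, with the factorial generalised by the gamma function; the Lanczos series plays the part that a real approxer might, a floating-point approximation of a real-valued computation. The type Histogram2 and the names below are illustrative, not the implementation's own.

import qualified Data.Map as Map
import Data.Map (Map)

-- A two-variable histogram with counts keyed by pairs of values.
type Histogram2 = Map (Int, Int) Double

-- Log-gamma via the Lanczos approximation (g = 7), for x > 0.
logGamma :: Double -> Double
logGamma x = 0.5 * log (2 * pi) + (x - 0.5) * log t - t + log s
  where
    t  = x + 6.5
    s  = head cs + sum [c / (x + fromIntegral i) | (i, c) <- zip [0 :: Int ..] (tail cs)]
    cs = [ 0.99999999999980993, 676.5203681218851, -1259.1392167224028
         , 771.32342877765313, -176.61502916214059, 12.507343278686905
         , -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7 ]

logFactorial :: Double -> Double
logFactorial n = logGamma (n + 1)

-- algn(A) for a two-variable histogram A, where the independent A^X is
-- formed from the product of the marginals divided by the size.
align :: Histogram2 -> Double
align aa = sum (map logFactorial counts) - sum (map logFactorial indeps)
  where
    counts = Map.elems aa
    z      = sum counts
    rows   = Map.fromListWith (+) [(x, c) | ((x, _), c) <- Map.toList aa]
    cols   = Map.fromListWith (+) [(y, c) | ((_, y), c) <- Map.toList aa]
    indeps = [rows Map.! x * cols Map.! y / z | x <- Map.keys rows, y <- Map.keys cols]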

Practicable shuffles

Practicable shuffles

Optimisation

If the computation resources are insufficient, section ‘Optimisation’ then goes on to consider practicable inducers and the constraints necessary to implement them. Consideration is given to the effects of the additional constraints on the correlation of the maximum function between the practicable inducer and its corresponding tractable inducer.

Searchers and Optimisers

Optimisation

Tuple Optimisation

Fud Inducers

Fud Decomposition Inducers

Optimisation Summary

Implementation

The theoretic optimisation definitions are then given an explicit example implementation in the next section, ‘Implementation’. There the computation definition is less elegant, but more practical, because of (i) explicit recursion, (ii) defined ordering, (iii) caching of temporary values and structures, and (iv) the assignment of variable references or identifiers to replace partition variables.

Implementation
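Point (iv) above, the replacement of partition variables by plain references, can be illustrated in isolation. The following is a minimal Haskell sketch, assuming only that each distinct partition is to be given a fresh integer identifier by threading a counter through the construction; the names assignIds and VarId are illustrative, not the implementation's own.

import qualified Data.Map as Map
import Data.Map (Map)

type VarId = Int

-- Give each distinct partition a fresh integer identifier, returning the
-- next unused identifier alongside the mapping.
assignIds :: Ord p => VarId -> [p] -> (VarId, Map p VarId)
assignIds next ps = (next + length ps', Map.fromList (zip ps' [next ..]))
  where
    ps' = ordNub ps

-- Keep the first occurrence of each partition, preserving order.
ordNub :: Ord p => [p] -> [p]
ordNub = go Map.empty
  where
    go _ [] = []
    go seen (p : rest)
      | Map.member p seen = go seen rest
      | otherwise         = p : go (Map.insert p () seen) rest

For example, assignIds 1 applied to a list of two distinct partitions returns the next identifier 3 together with the mapping, so a subsequent layer can continue numbering from 3.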

Model analysis

In the Haskell implementation of the Overview, the section Conversion to fud discusses the fud decomposition fud, $D^{\mathrm{F}} \in \mathcal{F}$, which is the intermediate fud used in the construction of the fud decomposition transform, $D^{\mathrm{T}}$. Creating the fud decomposition fud avoids the need to navigate the slices of the decomposition tree (Haskell) (Python).

Once the decomposition fud, $D^{\mathrm{F}}$, has been constructed, it can be applied to a sample histogram without reduction to derived variables. The resultant histogram can then be searched to find the label entropy of sub-models. See Entropy and alignment and Conditional entropy optimisation.
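The label entropy search can be pictured as grouping the events of the applied sample by the slice they fall into and computing the entropy of the label distribution within each slice. The following is a minimal Haskell sketch under the assumption that the applied histogram is given as counts over (slice, label) pairs; the types and names are illustrative, not the implementation's own.

import qualified Data.Map as Map
import Data.Map (Map)

type Slice = Int
type Label = Int

-- Entropy of a list of non-negative counts, in nats.
entropy :: [Double] -> Double
entropy cs = negate (sum [p * log p | c <- cs, let p = c / z, p > 0])
  where
    z = sum cs

-- Entropy of the label distribution within each slice.
labelEntropies :: Map (Slice, Label) Double -> Map Slice Double
labelEntropies aa =
  Map.map entropy (Map.fromListWith (++) [(s, [c]) | ((s, _), c) <- Map.toList aa])

-- Size-weighted sum of the slice label entropies, i.e. the conditional
-- entropy of the label given the slice.
conditionalEntropy :: Map (Slice, Label) Double -> Double
conditionalEntropy aa =
  sum [sliceSize s / z * e | (s, e) <- Map.toList (labelEntropies aa)]
  where
    z = sum (Map.elems aa)
    sizes = Map.fromListWith (+) [(s, c) | ((s, _), c) <- Map.toList aa]
    sliceSize s = Map.findWithDefault 0 s sizes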

Weather Forecast examples

There is a thread of Weather Forecast examples which demonstrate some of the concepts in the sections above:

Tuple partition search

Contracted decrementing linear non-overlapping fuds list maximiser initial subset

Contracted decrementing linear non-overlapping fuds list maximiser neighbourhood function

Contracted decrementing linear non-overlapping fuds list maximiser

Maximum-roll-by-derived-dimension contracted decrementing linear non-overlapping fuds tree maximiser

Limited-underlying tuple set list maximiser

Limited-layer limited-underlying limited-breadth fud tree searcher

Highest-layer limited-layer limited-underlying limited-breadth fud tree searcher

Maximum-roll-by-derived-dimension limited-layer limited-underlying limited-breadth fud tree searcher

Limited-derived derived variables set list maximiser

Practicable shuffle content alignment valency-density fud inducer

Practicable highest-layer shuffle content alignment valency-density fud inducer

Practicable highest-layer summed shuffle content alignment valency-density fud decomposition inducer

Highest-layer fud decomper

Highest-layer excluded-self maximum-roll-by-derived-dimension fud decomper


top