
Investigations into infinite metric bases (Part 1/2)

Let (X,d) be a metric space. Let A \subseteq X be such that for all x,y \in X the following implication holds:

    \[\left(\forall a \in A \ d(x,a) = d(a,y) \right) \implies x=y\]

then we say A is a metric base for (X,d). A sequence \alpha in X is called a dense sequence iff \mathrm{Im}\ \alpha is a dense set. A metric base A is a finite metric base iff A is a finite set. Finding a finite metric base (if one exists) in a general metric space is already an interesting challenge, and it will be explored in more detail in another post; some information about how finite metric bases relate to computability can also be found in another post. The question of the existence of infinite metric bases, however, seems a bit less challenging, due to the following easy result.

Proposition: Let (X,d) be a metric space. Let \alpha be a dense sequence in X. Then \mathrm{Im}\ \alpha is a metric base for (X,d).

Proof. Let \alpha be a dense sequence and set A = \mathrm{Im}\ \alpha. Let x,y\in X. Suppose d(x,a) = d(y,a) for all a \in A. Since \alpha is dense, there exists a sequence (\beta_i) in A such that \lim_i \beta_i = x. This is equivalent to \lim_i d(\beta_i, x) =0. Since d(x,a) = d(y,a) for all a \in A we have \lim_i d(\beta_i, y) = 0.

Set x_i=0, y_i =d(x,y) and z_i = d(x,\beta_i) + d(\beta_i, y) for all i \in \mathbb{N}. From the triangle inequality and non-negativity of the metric we now have x_i \leq y_i \leq z_i for all i \in \mathbb{N}.

Since \lim_i x_i = 0 and \lim_i z_i = 0, we have by the squeeze theorem \lim_i y_i = \lim_i d(x,y) = 0, therefore d(x,y) =0 which implies x=y. We conclude that A is a metric base for (X,d).

Q.E.D.
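The proposition can also be illustrated numerically. The following Python sketch (all names are my own choice) uses the metric space ([0,1], |x-y|) with the grid points k/N standing in for a dense sequence, and checks that the distance profiles of two distinct points differ:

```python
# A minimal numerical illustration: in ([0,1], |x - y|), distances to a
# dense set of grid points separate any two distinct points, so the
# dense set acts as a metric base.

def dist_profile(p, dense):
    # distances d(p, a) from p to every point a of the dense set
    return [abs(p - a) for a in dense]

N = 1000
dense = [k / N for k in range(N + 1)]

x, y = 0.3141, 0.2718
gap = max(abs(u - v)
          for u, v in zip(dist_profile(x, dense), dist_profile(y, dense)))

# sup_a |d(x,a) - d(y,a)| equals |x - y| (attained as a approaches x),
# so distinct points always have distinct distance profiles
print(gap)  # close to |x - y| = 0.0423
```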

Obviously, every A\subseteq X that contains a metric base is itself a metric base. What about metric bases which are not dense in X and also do not contain a finite metric base? Formally, we have the following question:

Question: Does there exist a metric space (X,d) such that it has an infinite metric base A with the following properties:

  1. A is not dense;
  2. for all finite metric bases B of (X,d) we have B \not \subseteq A?

More about that in the following posts.

Centrosymmetric QR decomposition and its connection to split-complex matrices

Introduction

Back in 2015 I published an article [1] about the QR decomposition of a centrosymmetric real matrix. In 2016 I started thinking about the meaning of this decomposition and of centrosymmetric matrices in general. I discovered that centrosymmetric matrices have a natural interpretation as split-complex matrices and, even more, that the centrosymmetric QR decomposition of a matrix actually corresponds to a QR decomposition of the split-complex matrix which the original matrix represents. In a certain sense, the centrosymmetric QR decomposition introduced in [1] of a real square matrix of order 2n is equivalent to a QR decomposition of a corresponding split-complex square matrix of order n. All these notions will be made precise in the following sections. This blog post is based on my own preprint.

Matrix representations

If V is a finite-dimensional vector space then we denote by \mathcal{L}(V) the set of all linear transformations V \rightarrow V. Recall that if V is an n-dimensional vector space and \mathcal{B} = (v_1,\dots,v_n) an ordered basis for V, then every linear transformation T : V \rightarrow V has an n \times n matrix representation with respect to \mathcal{B}, denoted [T]_\mathcal{B}. Further, for any two linear transformations S, T: V \rightarrow V we have [S \circ T]_\mathcal{B} = [S]_\mathcal{B} [T]_\mathcal{B}. The standard ordered basis (e_1,\dots,e_n) for \mathbb{R}^n is defined componentwise by e_{i,j} = 1 if i=j and e_{i,j} = 0 otherwise.

An algebra \mathfrak{A} is an ordered pair (A, \cdot) such that A is a vector space over a field K and \cdot: A \times A \rightarrow A is a bilinear mapping called multiplication.

Let \mathfrak{A} = (A, \cdot) be an algebra. A representation of \mathfrak{A} over a vector space V is a map \phi : A \rightarrow \mathcal{L}(V) such that

\phi(a_1 \cdot a_2) = \phi(a_1) \circ \phi(a_2) for all a_1, a_2 \in A.

Let M_n(\mathbb{R}) denote the set of n \times n real matrices. If V is an n-dimensional vector space and \mathcal{B} = (v_1,\dots,v_n) an ordered basis for V, then every linear transformation T : V\rightarrow V has a matrix representation [T]_\mathcal{B} \in M_n(\mathbb{R}). For each a \in A we have \phi(a) \in \mathcal{L}(V), so given an ordered basis \mathcal{B} of the n-dimensional space V we obtain [\phi(a)]_\mathcal{B} \in M_n(\mathbb{R}). A matrix representation of \mathfrak{A} with respect to \mathcal{B} is the map \phi_\mathcal{B} : A \rightarrow M_n(\mathbb{R}) defined by \phi_\mathcal{B}(a) = [\phi(a)]_\mathcal{B} for all a \in A. Further, we have \phi_\mathcal{B}(a_1 \cdot a_2) = \phi_\mathcal{B}(a_1) \phi_\mathcal{B}(a_2) for all a_1,a_2 \in A. These are all well-known notions from representation theory; for further information one can consult one of the standard textbooks, for example [3].

Algebra of split-complex numbers

Let \Cs = (\mathbb{R}^2, \cdot), where \cdot : \mathbb{R}^2 \times \mathbb{R}^2 \rightarrow \mathbb{R}^2 is given by

(1)   \begin{align*}(a, b) \cdot (c, d) &= (ac + bd, ad + bc)\end{align*}

for all (a, b), (c, d) \in \mathbb{R}^2. It is straightforward to verify that \Cs is an algebra. This is the well-known algebra of split-complex numbers, which are also sometimes known as hyperbolic numbers.
Similarly as for the complex numbers, each real number x \in \mathbb{R} can be identified with the pair (x, 0).
With this correspondence, the pair j \equiv (0, 1) has the property j^2 = +1 and j \not = \pm 1. Due to this property, j is called the hyperbolic unit.
Since (x, y) = (x, 0) + (0, 1) \cdot (0, y), in the following we shall denote a pair (x, y) simply with x + j y. The conjugate of z = a + j b is defined as z^* = a - j b.

For a hyperbolic number z = a + j b we define the real part as \Real(z) = a and hyperbolic part as \Hyp(z) = b.
For the modulus we set |z| = zz^* = a^2 - b^2 and we have |w \cdot z| = |w|\cdot|z| for all w, z \in \Cs. Note that, unlike the complex modulus, |z| may be negative.
For an extensive overview of the theory of hyperbolic numbers as well as their usefulness in physics one can check the literature, for example [2]. For the rest of this blog post, we shall refer to these numbers as split-complex numbers.
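These rules are easy to exercise in code. Below is a minimal Python sketch of split-complex arithmetic (the class and method names are my own choice), checking the multiplicativity |w \cdot z| = |w|\cdot|z| on a sample pair:

```python
# A minimal sketch of split-complex arithmetic following the
# definitions above; not a full-featured number type.
class SplitComplex:
    def __init__(self, a, b):
        self.a, self.b = a, b  # z = a + j b

    def __mul__(self, other):
        # (a + jb)(c + jd) = (ac + bd) + j(ad + bc), since j^2 = +1
        return SplitComplex(self.a * other.a + self.b * other.b,
                            self.a * other.b + self.b * other.a)

    def conj(self):
        # z* = a - j b
        return SplitComplex(self.a, -self.b)

    def modulus(self):
        # |z| = z z* = a^2 - b^2; can be negative, unlike the
        # complex modulus
        return self.a ** 2 - self.b ** 2

w = SplitComplex(3.0, 1.0)
z = SplitComplex(2.0, 5.0)
# multiplicativity |w z| = |w| |z| on this sample pair
print((w * z).modulus(), w.modulus() * z.modulus())  # -168.0 -168.0
```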

Centrosymmetric representation of split-complex matrices

By \M{n} we denote the set of all n \times n split-complex matrices, i.e. matrices whose entries are split-complex numbers. Note that Z \in \M{n} if and only if there exist n \times n real matrices A, B such that Z = A + jB. If A is a matrix then its transpose A^T is defined by (A^T)_{i,j} = a_{j,i} for all i,j. In the following we denote by I_n the n \times n identity matrix and by O_n the n \times n zero matrix. For each n, let J \in M_n(\mathbb{R}) be the exchange matrix J = \mathrm{antidiag}(1,\dots,1); note that J^2 = I_n. A matrix R \in M_n(\mathbb{R}) is upper-triangular if r_{i,j} = 0 for all i > j.
A real matrix R is centrosymmetric if RJ = JR. An overview of centrosymmetric matrices can be found in [4]. We denote by \CsMat{n} the set of all n \times n centrosymmetric real matrices.

For the algebra \Cs of split-complex numbers the well-known matrix representation {cs} : \Cs \rightarrow M_{2}(\mathbb{R}) with respect to (e_1, e_2) is given by

(2)   \begin{equation*}{cs}(z) = \begin{bmatrix} \Real(z) & \Hyp(z) \\ \Hyp(z) & \Real(z) \end{bmatrix},\qquad \forall z \in \Cs.\end{equation*}


It is straightforward to check that for all w, z \in \Cs we have \CsEmb{w \cdot z} = \CsEmb{w} \CsEmb{z}.
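This multiplicativity can also be checked numerically. A small Python sketch (helper names are my own choice), with split-complex numbers stored as pairs (a, b):

```python
# Check that the 2x2 representation (2) turns split-complex
# multiplication into matrix multiplication: cs(w z) = cs(w) cs(z).

def cs(a, b):
    # cs(a + jb) = [[a, b], [b, a]]
    return [[a, b], [b, a]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def sc_mul(w, z):
    # (a + jb)(c + jd) = (ac + bd) + j(ad + bc)
    (a, b), (c, d) = w, z
    return (a * c + b * d, a * d + b * c)

w, z = (3, 1), (2, 5)
print(matmul(cs(*w), cs(*z)) == cs(*sc_mul(w, z)))  # True
```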

Further, on the vector space M_n(\Cs) there is a natural multiplication operation \cdot : M_n(\Cs) \times M_n(\Cs) \rightarrow M_n(\Cs) given by

(3)   \begin{equation*}(A + jB) \cdot (C + jD) := (AC + BD) + j(AD + BC)\end{equation*}


for all A,B,C,D \in M_n(\mathbb{R}). It is easy to verify that
(M_n(\Cs), \cdot) is an algebra. In the following we refer to this algebra as the algebra of split-complex (square) matrices and denote it with \mathfrak{M}_n.

Note that in the following whenever we have two matrices A,B \in M_n(\Cs), their product shall explicitly be written with a dot '\cdot', e.g. A \cdot B to indicate multiplication defined in (3). Otherwise, if A,B \in M_n(\mathbb{R}) we simply write AB.
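The product (3) is easy to implement directly. Below is a minimal Python sketch (helper names are my own choice) which stores a split-complex matrix A + jB as the pair (A, B):

```python
# A sketch of the split-complex matrix product (3):
# (A + jB)(C + jD) = (AC + BD) + j(AD + BC), using plain lists.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sc_matmul(W, Z):
    (A, B), (C, D) = W, Z
    return (madd(matmul(A, C), matmul(B, D)),
            madd(matmul(A, D), matmul(B, C)))

A, B = [[1, 2], [3, 4]], [[2, 3], [4, 5]]
I, O = [[1, 0], [0, 1]], [[0, 0], [0, 0]]
# multiplying by the identity I + jO leaves A + jB unchanged
print(sc_matmul((A, B), (I, O)) == (A, B))  # True
```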

To state and prove our main result, we shall need the following well known characterization of centrosymmetric matrices.

Proposition 1:
Let A \in M_{2n}(\mathbb{R}). Then A \in \CsMat{2n} if and only if there exist B,C \in M_{n}(\mathbb{R}) such that

(4)   \begin{equation*}A = \begin{bmatrix}B & C J \\JC & J B J\end{bmatrix}.\end{equation*}

Proof:
Suppose

    \[A =\begin{bmatrix}X & Y \\W & Z\end{bmatrix}.\]


Since A is centrosymmetric, we have AJ = JA, or equivalently, in block-form
    \[\begin{bmatrix}O & J \\ J & O\end{bmatrix}\begin{bmatrix}X & Y \\ W & Z\end{bmatrix} =\begin{bmatrix}X & Y \\ W & Z\end{bmatrix}\begin{bmatrix}O & J \\ J & O\end{bmatrix}\]

This is equivalent to

    \[\begin{bmatrix}JW & JZ \\ JX & JY\end{bmatrix} =\begin{bmatrix}YJ & XJ \\ ZJ & WJ\end{bmatrix}\]


Comparing the corresponding blocks gives JZ = XJ and JW = YJ, hence Z = JXJ and Y = JWJ, so

    \[A = \begin{bmatrix}X & JWJ \\W & JXJ\end{bmatrix}\]


Now, choosing B = X and C = JW and using the fact that J^2 = I, it follows that A has the form (4).

 

Conversely, suppose A has the form (4). It can easily be shown by block-matrix multiplication that AJ = JA, hence A is centrosymmetric.

QED.

Proposition: The map {cs} : \M{n} \rightarrow \CsMat{2n} defined as

(5)   \begin{equation*}{cs}(A + jB) = \begin{bmatrix}A & BJ \\JB & JAJ\end{bmatrix}\end{equation*}


is a matrix representation of \mathfrak{M}_n. We call the representation {cs} the standard centrosymmetric matrix representation of \mathfrak{M}_n.

Proof:
Let W \in \M{n} and Z \in \M{n} be such that W = A + j B and Z = C + j D.
We now have

    \begin{align*}\CsEmb{W} \CsEmb{Z} &= \begin{bmatrix} A & B J \\ J B & J A J \end{bmatrix} \begin{bmatrix} C & D J \\ J D & J C J \end{bmatrix} \\ &= \begin{bmatrix} A C + B D & (A D + B C) J \\ J (A D + B C) & J (A C + B D) J\end{bmatrix} = \CsEmb{W \cdot Z}\end{align*}


which proves the claim.

QED.
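The homomorphism property just proved can also be checked numerically. The following Python sketch (helper names are my own choice) builds {cs}(A + jB) block-wise and compares {cs}(W \cdot Z) with {cs}(W){cs}(Z) on a sample pair:

```python
# Numerical check of cs(W . Z) = cs(W) cs(Z), with a split-complex
# matrix A + jB stored as the pair (A, B) of real matrices.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sc_matmul(W, Z):
    # (A + jB)(C + jD) = (AC + BD) + j(AD + BC)
    (A, B), (C, D) = W, Z
    return (madd(matmul(A, C), matmul(B, D)),
            madd(matmul(A, D), matmul(B, C)))

def exchange(n):
    # J = antidiag(1, ..., 1)
    return [[1 if i + j == n - 1 else 0 for j in range(n)] for i in range(n)]

def cs(M):
    # cs(A + jB) = [[A, BJ], [JB, JAJ]], a 2n x 2n centrosymmetric matrix
    A, B = M
    J = exchange(len(A))
    BJ, JB = matmul(B, J), matmul(J, B)
    JAJ = matmul(matmul(J, A), J)
    return ([ra + rbj for ra, rbj in zip(A, BJ)] +
            [rjb + rjaj for rjb, rjaj in zip(JB, JAJ)])

W = ([[1, 2], [3, 4]], [[2, 3], [4, 5]])
Z = ([[0, 1], [1, 0]], [[1, 0], [2, 1]])
print(cs(sc_matmul(W, Z)) == matmul(cs(W), cs(Z)))  # True
```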

Proposition.
Let Q \in \M{n}. Then {cs}(Q^T) = {cs}(Q)^T.

Proof.
Let Q = A + jB. Then

    \[{cs}(Q^T) = {cs}(A^T + jB^T) = \begin{bmatrix}A^T & B^T J \\J B^T & J A^T J\end{bmatrix}.\]


On the other hand, keeping in mind that J^T = J we have

    \[{cs}(Q)^T = \begin{bmatrix}A & B J \\J B & JAJ\end{bmatrix}^T =\begin{bmatrix}A^T & (JB)^T \\(BJ)^T & (JAJ)^T\end{bmatrix}= \begin{bmatrix}A^T & B^T J \\J B^T & J A^T J\end{bmatrix}.\]


Hence, {cs}(Q^T) = {cs}(Q)^T.

QED.

Proposition.
The map {cs} is a bijection.

Proof.
Injectivity. Let Z = A + jB and W = C + jD and W \not = Z. From this,
it follows that A \not = C or B \not = D. Assume that A \not = C. Then

    \begin{align*}\CsEmb{Z} = \begin{bmatrix}A & BJ \\JB & JAJ\end{bmatrix} & \quad &\CsEmb{W} = \begin{bmatrix}C & DJ \\JD & JCJ\end{bmatrix}.\end{align*}


Since A \not = C we have \CsEmb{Z} \not = \CsEmb{W}. Now let B \not = D and assume that \CsEmb{Z} = \CsEmb{W}. Comparing the lower-left blocks gives JB = JD; multiplying by J from the left yields B = D, a contradiction.
We conclude that \CsEmb{\cdot} is injective.

Surjectivity. Let A \in \CsMat{2n}. By proposition 1 we can find matrices B and C such that (4) holds. But then \CsEmb{B + jC} = A and since A was arbitrary, we conclude that {cs} is surjective.
Now, injectivity and surjectivity of {cs} imply by definition that {cs} is a bijection.

QED.

Correspondence of QR decompositions

Definition:
Let A \in M_n(\Cs). A pair (Q, R) with Q, R \in M_n(\Cs) is a QR decomposition of A over \Cs if the following holds:

  1. Q is orthogonal, i.e. Q^T \cdot Q = Q \cdot Q^T = I,
  2. R is upper-triangular,
  3. A = Q \cdot R.

The notion of an m\times n double-cone matrix was introduced by the author in [1].
Here we state the definition in block-form for the case of H \in \CsMat{2n}.

Definition:
Let H \in \CsMat{2n}. Then H is a double-cone matrix iff there exist A, B \in M_n(\mathbb{R}), both upper-triangular, such that

    \[H = \begin{bmatrix}A & BJ \\JB & JAJ\end{bmatrix}.\]

Definition:
Let Z \in \CsMat{n}. A pair (W, Y), with W, Y \in \CsMat{n}, is a centrosymmetric QR decomposition of Z if the following holds:

  1. W is an orthogonal matrix,
  2. Y is a double-cone matrix,
  3. Z = WY.

The algorithm to obtain an approximation of a centrosymmetric QR decomposition of a given centrosymmetric matrix A was given in [1].

The following theorem provides one interpretation of the centrosymmetric QR decomposition in the case of square centrosymmetric matrices of even order, by establishing the equivalence of their centrosymmetric QR decomposition with the QR decomposition of the corresponding split-complex matrix.

Theorem 1 (QR decomposition correspondence):
Let A \in \M{n}. Then (Q,R) \in \M{n} \times \M{n} is a QR decomposition of A if and only if

    \[(\CsEmb{Q}, \CsEmb{R}) \in \CsMat{2n} \times \CsMat{2n}\]

is a centrosymmetric QR decomposition of \CsEmb{A} \in \CsMat{2n}.

Proof.
Let (Q, R) \in \M{n} \times \M{n} be a QR decomposition of A.
Let W = \CsEmb{Q} and Y = \CsEmb{R}, and write Q = Q_1 + j Q_2, R = R_1 + j R_2. We have

    \begin{align*}W = \begin{bmatrix}Q_1 & Q_2 J \\J Q_2 & J Q_1 J\end{bmatrix} && Y = \begin{bmatrix}R_1 & R_2 J \\J R_2 & J R_1 J\end{bmatrix}\end{align*}


Since Q^T \cdot Q = I it follows that \CsEmb{Q^T \cdot Q} = \CsEmb{Q^T} \CsEmb{Q} = \CsEmb{Q}^T \CsEmb{Q} = \CsEmb{I}.
From this we have \CsEmb{Q}^T \CsEmb{Q} = I, i.e. W^T W = I, hence W is orthogonal. Since R is upper-triangular and R = R_1 + j R_2, by definition both R_1 and R_2 are upper-triangular. Further, \CsEmb{R} is centrosymmetric by definition. From this it follows that Y is a centrosymmetric double-cone matrix.
Finally, we have \CsEmb{A} = \CsEmb{Q \cdot R} = \CsEmb{Q} \CsEmb{R}. Hence, (\CsEmb{Q}, \CsEmb{R}) is
a centrosymmetric QR decomposition of \CsEmb{A}.

Conversely, suppose (W, Y) is a centrosymmetric QR decomposition of \CsEmb{A}, and set Q = {cs}^{-1}(W) and R = {cs}^{-1}(Y). Then \CsEmb{A} = WY, where W is centrosymmetric and orthogonal and Y is a double-cone matrix.

From the fact that W is centrosymmetric we have (by Proposition 1) that

    \[W = \begin{bmatrix}W_1 & W_2 J \\J W_2 & J W_1 J\end{bmatrix}\]


Now the property of W being orthogonal i.e. the condition W^T W = I implies

(6)   \begin{equation*}\begin{bmatrix}W_1^T W_1 + W_2^T W_2 & (W_1^T W_2 + W_2^T W_1) J \\J(W_2^T W_1 + W_1^T W_2) & J(W_2^T W_2 + W_1^T W_1) J\end{bmatrix}= \begin{bmatrix}I & \ \\\ & I\end{bmatrix}.\end{equation*}


On the other hand, we have

(7)   \begin{align*}Q = W_1 + j W_2 && R = Y_1 + j Y_2\end{align*}


First we prove that Q is orthogonal. From (6) we obtain

    \begin{align*}Q^T \cdot Q &= (W_1 + j W_2)^T \cdot (W_1 + j W_2) \\&= (W_1^T + j W_2^T) \cdot (W_1 + j W_2) \\& = W_1^T W_1 + W_2^T W_2 + j (W_1^T W_2 + W_2^T W_1) \\& = I + jO = I.\end{align*}


The matrix Y is centrosymmetric and double-cone which implies

    \[Y = \begin{bmatrix}Y_1 & Y_2 J \\J Y_2 & J Y_1 J\end{bmatrix}\]


where both Y_1 and Y_2 are upper-triangular. This now implies that {cs}^{-1}(Y) = Y_1 + j Y_2 is upper-triangular.

Finally, let us prove that QR = A. We have

    \[Q \cdot R = {cs}^{-1}(W) \cdot {cs}^{-1}(Y) = {cs}^{-1}(WY) = {cs}^{-1}(\CsEmb{A}) = A.\]


We conclude that (Q, R) is a QR decomposition of A.
QED.

Example:
Let

    \[A = \begin{bmatrix}1 + 2j & 2 + 3j \\3 + 4j & 4 + 5j\end{bmatrix}.\]


Note that A = W + jZ where

(8)   \begin{align*}W = \begin{bmatrix}1 & 2 \\3 & 4\end{bmatrix} & \mbox{ and } Z = \begin{bmatrix}2 & 3 \\4 & 5\end{bmatrix}.\end{align*}


We have

    \[{cs}(A) = {cs}(W + jZ) = \begin{bmatrix}W & ZJ \\JZ & JWJ\end{bmatrix}= \begin{bmatrix}1 & 2 & 3 & 2 \\3 & 4 & 5 & 4 \\4 & 5 & 4 & 3 \\2 & 3 & 2 & 1\end{bmatrix}\]


By applying the \textsc{CentrosymmetricQR} algorithm from [1] to {cs}(A) we obtain the approximations:

    \begin{align*}Q &\approx \begin{bmatrix}-0.156594 & -0.106019 & -0.813126 & 0.550513 \\0.106019 & -0.156594 & 0.550513 & 0.813126 \\0.813126 & 0.550513 & -0.156594 & 0.106019 \\0.550513 & -0.813126 & -0.106019 & -0.156594\end{bmatrix}\\R &\approx \begin{bmatrix}4.51499 & 5.82806 & 4.41384 & 3.10078 \\0 & -0.525226 & -0.525226 & 0 \\0 & -0.525226 & -0.525226 & 0 \\3.10078 & 4.41384 & 5.82806 & 4.51499\end{bmatrix}\end{align*}

Applying {cs}^{-1} to Q and R yields:

    \begin{align*}{cs}^{-1}(Q) &\approx \begin{bmatrix}-0.156594 & -0.106019 \\0.106019 & -0.156594\end{bmatrix} + j \begin{bmatrix}0.550513 & -0.813126 \\0.813126 & 0.550513\end{bmatrix} \\{cs}^{-1}(R) &\approx \begin{bmatrix}4.51499 & 5.82806 \\0 & -0.525226\end{bmatrix} + j\begin{bmatrix}3.10078 & 4.41384 \\0 & -0.525226\end{bmatrix}\end{align*}


with A \approx {cs}^{-1}(Q) \cdot {cs}^{-1}(R). Now, from Theorem 1 we conclude that ({cs}^{-1}(Q), {cs}^{-1}(R)) is an approximation of a QR decomposition of A.
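The example can be verified numerically. In the Python sketch below (helper names are my own choice), Q and R are the approximate factors printed above, so the comparisons use a loose tolerance:

```python
# Check that the approximate factors above satisfy Q R ~ cs(A) and
# Q^T Q ~ I, within the precision of the printed entries.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def close(X, Y, tol=1e-3):
    return all(abs(x - y) < tol
               for rx, ry in zip(X, Y) for x, y in zip(rx, ry))

csA = [[1, 2, 3, 2],
       [3, 4, 5, 4],
       [4, 5, 4, 3],
       [2, 3, 2, 1]]

Q = [[-0.156594, -0.106019, -0.813126,  0.550513],
     [ 0.106019, -0.156594,  0.550513,  0.813126],
     [ 0.813126,  0.550513, -0.156594,  0.106019],
     [ 0.550513, -0.813126, -0.106019, -0.156594]]

R = [[4.51499,  5.82806,   4.41384,  3.10078],
     [0,       -0.525226, -0.525226, 0],
     [0,       -0.525226, -0.525226, 0],
     [3.10078,  4.41384,   5.82806,  4.51499]]

I4 = [[float(i == j) for j in range(4)] for i in range(4)]

print(close(matmul(Q, R), csA))            # True: Q R ~ cs(A)
print(close(matmul(transpose(Q), Q), I4))  # True: Q approximately orthogonal
```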

Conclusion

We introduced the standard centrosymmetric representation {cs} for split-complex matrices. Using this representation we proved that a QR decomposition of a square split-complex matrix A can be obtained by calculating the centrosymmetric QR decomposition introduced by the author in [1] of its centrosymmetric matrix representation {cs}(A).

References

  1. Konrad Burnik, A structure-preserving QR factorization for centrosymmetric real matrices, Linear Algebra and its Applications 484 (2015), 356–378.
  2. Francesco Catoni, Dino Boccaletti, Roberto Cannata, Vincenzo Catoni, Paolo Zampeti, Hyperbolic Numbers, in: Geometry of Minkowski Space-Time, Springer, Berlin, Heidelberg (2011), 3–23. ISBN 978-3-642-17977-8.
  3. Charles W. Curtis, Irving Reiner, Representation Theory of Finite Groups and Associative Algebras, John Wiley & Sons (1962; reissued 2006 by AMS Bookstore). ISBN 978-0-470-18975.
  4. James R. Weaver, Centrosymmetric (Cross-Symmetric) Matrices, Their Basic Properties, Eigenvalues, and Eigenvectors, The American Mathematical Monthly 92 (1985), no. 10, 711–717. doi:10.2307/2323222.

Copyright © 2018, Konrad Burnik

Two questions related to computability structures on the unit square

The following questions were proposed to me by professor Zvonko Iljazović.

Let I = [0,1]. Let d_\infty be the metric defined as

    \[ d_\infty((x_1,x_2), (y_1,y_2)) = \max (|x_1 - y_1|, |x_2 - y_2|).\]

for all (x_1,x_2), (y_1,y_2) \in I^2. Let \mathcal{M} = (I^2, d_\infty).

Let a = (0,0) and b = (0,1). Suppose (x_i) is a sequence in \mathcal{M} such that d_\infty(x_i, a) and d_\infty(x_i, b) are computable sequences \mathbb{N} \rightarrow \mathbb{R}. Does the sequence (x_i) need to be computable, in the sense that its component sequences are computable sequences \mathbb{N} \rightarrow \mathbb{R}?


The answer is no. Let x_i = w = (1, \gamma) for all i, where 0 < \gamma < 1 is an incomputable real. Then d_\infty(x_i, a) = d_\infty(x_i, b) = 1 for all i, so both distance sequences are computable; however, (x_i) is not a computable sequence.
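The distance degeneracy behind this counterexample is easy to see numerically: every point (1, \gamma) with 0 < \gamma < 1 lies at d_\infty-distance exactly 1 from both a and b, so the two distance sequences carry no information about \gamma. A short Python sketch (names are my own choice):

```python
# In (I^2, d_inf), all points (1, g) with 0 < g < 1 are at distance
# exactly 1 from both a = (0,0) and b = (0,1).

def d_inf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

a, b = (0.0, 0.0), (0.0, 1.0)
for g in (0.1, 0.25, 0.5, 0.9):
    w = (1.0, g)
    print(d_inf(w, a), d_inf(w, b))  # always 1.0 1.0
```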

Now the second question. Let a=(0,0), b=(0,1) and c=(1,0). Let (x_i) be a sequence such that d_\infty(x_i,a), d_\infty(x_i,b) and d_\infty(x_i,c) are computable sequences \mathbb{N} \rightarrow \mathbb{R}. Does the sequence (x_i) need to be computable?

The answer is yes. The proof is available in our upcoming publication "Dense Computability Structures".

Proposition:
The metric space \mathcal{M} has at least two maximal computability structures in which points a=(0,0) and b=(0,1) are computable.

Proof:
Let (q_j) be a computable sequence in \mathbb{R}^2 such that \mathrm{Im}\ q = \mathbb{Q}^2 \cap \mathcal{M}. Let S_q be a computability structure induced by q. Since S_q is separable, it is also maximal.

Let c = (1,\gamma) where \gamma \in \langle 0,1 \rangle is an incomputable real. Then the finite sequence (a,b,c) has the property d_\infty(a,b) = d_\infty(a,c) = d_\infty(b,c) = 1 hence by definition (a,b,c) is an effective finite sequence.
Therefore,

    \[T = \{(a,a,a,\dots), (b,b,b,\dots), (c,c,c,\dots)\}\]

is a computability structure and there exists a maximal computability structure M such that T \subseteq M and a,b,c are computable points in M.

On the other hand, the point c is not computable in S_q since that would contradict the fact that w is a non-computable real. This is equivalent to the fact that (c,c,c,\dots) \not \in S_q. Therefore, M \not = S_q. From this we conclude that M and S_q are two maximal computability structures on \mathcal{M} in which points a and b are computable.