Given a representation \(\rho : G \to GL(V)\), it is often desirable to find a basis for \(V\) that block diagonalizes each \(\rho(g)\) with the block sizes being as small as possible. This speeds up matrix algebra operations, since they can now be done block-wise.
‣ BlockDiagonalBasisOfRepresentation( rho ) ( function )
Returns: Basis for \(V\) that block diagonalizes \(\rho\).
Let \(G\) have irreducible representations \(\rho_i\), with dimension \(d_i\) and multiplicity \(m_i\). The basis returned by this operation gives each \(\rho(g)\) as a block diagonal matrix which has \(m_i\) blocks of size \(d_i \times d_i\) for each \(i\).
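For example, here is a short sketch of typical usage with the standard representation of \(S_3\). It assumes the returned basis is a list of row vectors that can be used directly as a change of basis matrix, as in the IrreducibleDecomposition example later in this chapter; the exact basis returned may differ.

gap> # Sketch: block diagonalise the standard representation of S_3.
> H := SymmetricGroup(3);;
gap> tau := FuncToHom@RepnDecomp(H, h -> PermutationMat(h, 3));;
gap> basis := BlockDiagonalBasisOfRepresentation(tau);;
gap> # Assuming basis can be treated as a matrix of row vectors, conjugating
> # by it puts each image of tau into block diagonal form:
> tau_diag := ComposeHomFunction(tau, X -> basis^-1 * X * basis);;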
‣ BlockDiagonalRepresentation( rho ) ( function )
Returns: Representation of \(G\) isomorphic to \(\rho\) where the images \(\rho(g)\) are block diagonalized.
This is just a convenience operation that uses BlockDiagonalBasisOfRepresentation
(5.1-1) to calculate the basis change matrix, then applies it to put \(\rho\) into block diagonalised form.
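Continuing the sketch above, this does the same change of basis in one step; since the result is, by definition, isomorphic to the input, we can check it with AreRepsIsomorphic:

gap> # Sketch: the one-step version of the change of basis above.
> diag_tau := BlockDiagonalRepresentation(tau);;
gap> AreRepsIsomorphic(tau, diag_tau);
true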
‣ REPN_ComputeUsingMyMethod( rho ) ( attribute )
Returns: A record in the same format as REPN_ComputeUsingSerre
(5.3-4).
Computes the same values as REPN_ComputeUsingSerre
(5.3-4), taking the same options. The heavy lifting of this method is done by LinearRepresentationIsomorphism
(2.1-1), which accepts some further options that influence the algorithms used.
gap> G := SymmetricGroup(4);;
gap> irreps := IrreducibleRepresentations(G);;
gap> rho := DirectSumOfRepresentations([irreps[4], irreps[5]]);;
gap> # Jumble rho up a bit so it's not so easy for the library.
> A := [ [ 3, -3, 2, -4, 0, 0 ], [ 4, 0, 1, -5, 1, 0 ], [ -3, -1, -2, 4, -1, -2 ],
> [ 4, -4, -1, 5, -3, -1 ], [ 3, -2, 1, 0, 0, 0 ], [ 4, 2, 4, -1, -2, 1 ] ];;
gap> rho := ComposeHomFunction(rho, B -> A^-1 * B * A);;
gap> # We've constructed rho from two degree 3 irreps, so there are a few
> # things we can check for correctness:
> decomposition := REPN_ComputeUsingMyMethod(rho);;
gap> # Two distinct irreps, so the centralizer has dimension 2
> Length(decomposition.centralizer_basis) = 2;
true
gap> # Two distinct irreps i.e. two invariant subspaces
> Length(decomposition.decomposition) = 2;
true
gap> # All subspaces are dimension 3
> ForAll(decomposition.decomposition, Vs -> Length(Vs) = 1 and Dimension(Vs[1]) = 3);
true
gap> # And finally, check that the block diagonalized representation
> # computed is actually isomorphic to rho:
> AreRepsIsomorphic(rho, decomposition.diagonal_rep);
true
‣ REPN_ComputeUsingMyMethodCanonical( rho ) ( attribute )
Returns: A record in the same format as REPN_ComputeUsingMyMethod
(5.2-1).
Performs the same computation as REPN_ComputeUsingMyMethod
(5.2-1), but first splits the representation into canonical summands using CanonicalDecomposition
(5.3-1). This can significantly reduce the size of the matrices we need to work with, so it could be much faster.
If the option parallel
is given, the decomposition of canonical summands into irreps is done in parallel, which could be much faster.
gap> # This is the same example as before, but splits into canonical
> # summands internally. It gives exactly the same results, up to
> # isomorphism.
> other_decomposition := REPN_ComputeUsingMyMethodCanonical(rho);;
gap> Length(other_decomposition.centralizer_basis) = 2;
true
gap> Length(other_decomposition.decomposition) = 2;
true
gap> ForAll(other_decomposition.decomposition, Vs -> Length(Vs) = 1 and Dimension(Vs[1]) = 3);
true
gap> AreRepsIsomorphic(rho, other_decomposition.diagonal_rep);
true
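The parallel option is passed with GAP's usual options syntax, mirroring the REPN_ComputeUsingSerre (5.3-4) example later in this chapter. A sketch, reusing rho from above; whether it actually runs faster depends on your setup:

gap> # Sketch: decompose the canonical summands into irreps in parallel.
> REPN_ComputeUsingMyMethodCanonical(rho : parallel);;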
Note: all computation in this section is actually done in the function REPN_ComputeUsingSerre
(5.3-4); the other functions are wrappers around it.
‣ CanonicalDecomposition( rho ) ( function )
Returns: List of vector spaces \(V_i\), each \(G\)-invariant and a direct sum of isomorphic irreducibles. That is, for each \(i\), \(V_i \cong \oplus_j W_i\) (as representations) where \(W_i\) is an irreducible \(G\)-invariant vector space.
Computes the canonical decomposition of \(V\) into \(\oplus_i\;V_i\) using the formulas for projections \(V \to V_i\) due to Serre. You can pass in the option irreps
with a list of irreps of \(G\), and this will be used instead of computing a complete list ourselves. If you already know which irreps will appear in \(\rho\), for instance, this will save time.
gap> # This is the trivial group
> G := Group(());;
gap> # The trivial group has only one representation per degree, so a
> # degree d representation decomposes into a single canonical summand
> # containing the whole space
> rho := FuncToHom@RepnDecomp(G, g -> IdentityMat(3));;
gap> canonical_summands_G := CanonicalDecomposition(rho);
[ ( Cyclotomics^3 ) ]
gap> # More interesting example, S_3
> H := SymmetricGroup(3);;
gap> # The standard representation: a permutation to the corresponding
> # permutation matrix.
> tau := FuncToHom@RepnDecomp(H, h -> PermutationMat(h, 3));;
gap> # Two canonical summands corresponding to the degree 2 and
> # trivial irreps (in that order)
> List(CanonicalDecomposition(tau), Dimension);
[ 2, 1 ]
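The irreps option is passed in the same way. A sketch, reusing tau and H from above; note that the order of the summands may depend on the order of the supplied irreps:

gap> # Sketch: supply a list of irreps of H ourselves, so the library does
> # not have to compute one.
> summands := CanonicalDecomposition(tau : irreps := IrreducibleRepresentations(H));;
gap> # The same two canonical summands, of dimensions 2 and 1.
> SortedList(List(summands, Dimension));
[ 1, 2 ]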
‣ IrreducibleDecomposition( rho ) ( function )
Returns: List of vector spaces \(W_j\) such that \(V = \oplus_j W_j\) and each \(W_j\) is an irreducible \(G\)-invariant vector space.
Computes the decomposition of \(V\) into irreducible subrepresentations.
gap> # The trivial group has 1 irrep of degree 1, so rho decomposes into 3
> # lines.
> irred_decomp_G := IrreducibleDecomposition(rho);
[ rec( basis := [ [ 1, 0, 0 ] ] ), rec( basis := [ [ 0, 1, 0 ] ] ),
  rec( basis := [ [ 0, 0, 1 ] ] ) ]
gap> # The spaces are returned in this format - explicitly keeping the
> # basis - since this basis block diagonalises rho into the irreps,
> # which are the smallest possible blocks. This is more obvious with
> # H.
> irred_decomp_H := IrreducibleDecomposition(tau);
[ rec( basis := [ [ 1, 1, 1 ] ] ),
  rec( basis := [ [ 1, E(3), E(3)^2 ], [ 1, E(3)^2, E(3) ] ] ) ]
gap> # Using the basis vectors given there block diagonalises tau into
> # the two blocks corresponding to the two irreps:
> nice_basis := [ [ 1, 1, 1 ], [ 1, E(3), E(3)^2 ], [ 1, E(3)^2, E(3) ] ];;
gap> tau_diag := ComposeHomFunction(tau, X -> nice_basis^-1 * X * nice_basis);
[ (1,2,3), (1,2) ] ->
[ [ [ 1, 0, 0 ], [ 0, E(3), 0 ], [ 0, 0, E(3)^2 ] ],
  [ [ 1, 0, 0 ], [ 0, 0, E(3)^2 ], [ 0, E(3), 0 ] ] ]
‣ IrreducibleDecompositionCollected( rho ) ( function )
Returns: List of lists \(V_i\) of vector spaces \(V_{ij}\) such that \(V = \oplus_i \oplus_j V_{ij}\) and \(V_{ik} \cong V_{il}\) for all \(i\), \(k\) and \(l\) (as representations).
Computes the decomposition of \(V\) into irreducible subrepresentations, grouping together the isomorphic subrepresentations.
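For example, here is a sketch reusing tau from the examples above: tau has two non-isomorphic irreducible constituents, so after collecting there are still two subspaces in total, now grouped by isomorphism class.

gap> # Sketch: group the two irreducible summands of tau by isomorphism.
> collected := IrreducibleDecompositionCollected(tau);;
gap> Sum(collected, Length) = 2;
true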
‣ REPN_ComputeUsingSerre( rho ) ( attribute )
Returns: A record, in the format described below.
This function does all of the computation and (since it is an attribute) saves the results. Doing all of the calculations at the same time ensures consistency when it comes to irrep ordering, block ordering and basis ordering. There is no canonical ordering of irreps, so this is crucial.
The option irreps is the complete list of irreps involved in the direct sum decomposition of rho. It can be given in case the default (running Dixon's algorithm) is too expensive, or if, for example, you do not want representations over Cyclotomics.
The return value of this function is a record with fields:
basis
: The basis that block diagonalises \(\rho\), see BlockDiagonalBasisOfRepresentation
(5.1-1).
diagonal_rep
: \(\rho\), block diagonalised with the basis above. See BlockDiagonalRepresentation
(5.1-2).
decomposition
: The irreducible \(G\)-invariant subspaces, collected according to isomorphism, see IrreducibleDecompositionCollected
(5.3-3).
centralizer_basis
: An orthonormal basis for the centralizer ring of \(\rho\), written in block form. See CentralizerBlocksOfRepresentation
(6.1-1).
Pass the option parallel
so that the per-irrep computations are done in parallel.
Pass the option irreps
with the complete list of irreps of \(\rho\) to avoid recomputing this list (which could be very expensive).
gap> # Does the same thing we have done in the examples above, but all in
> # one step, with as many subcomputations reused as possible
> REPN_ComputeUsingSerre(tau);
rec( basis := [ [ 1, 1, 1 ], [ 1, E(3), E(3)^2 ], [ 1, E(3)^2, E(3) ] ],
  centralizer_basis := [ [ [ [ 1 ] ], [ [ 0, 0 ], [ 0, 0 ] ] ],
      [ [ [ 0 ] ], [ [ 1, 0 ], [ 0, 1 ] ] ] ],
  decomposition := [ [ rec( basis := [ [ 1, 1, 1 ] ] ) ], [ ],
      [ rec( basis := [ [ 1, E(3), E(3)^2 ], [ 1, E(3)^2, E(3) ] ] ) ] ],
  diagonal_rep := [ (1,2,3), (1,2) ] ->
    [ [ [ 1, 0, 0 ], [ 0, E(3), 0 ], [ 0, 0, E(3)^2 ] ],
      [ [ 1, 0, 0 ], [ 0, 0, E(3)^2 ], [ 0, E(3), 0 ] ] ] )
gap> # You can also do the computation in parallel:
> REPN_ComputeUsingSerre(tau : parallel);;
gap> # Or specify the irreps if you have already computed them:
> irreps_H := IrreducibleRepresentations(H);;
gap> REPN_ComputeUsingSerre(tau : irreps := irreps_H);;